Dec 12 17:26:08.257840 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 12 17:26:08.257864 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 12 17:26:08.257872 kernel: KASLR enabled Dec 12 17:26:08.257878 kernel: efi: EFI v2.7 by EDK II Dec 12 17:26:08.257884 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Dec 12 17:26:08.257890 kernel: random: crng init done Dec 12 17:26:08.257897 kernel: secureboot: Secure boot disabled Dec 12 17:26:08.257904 kernel: ACPI: Early table checksum verification disabled Dec 12 17:26:08.257911 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Dec 12 17:26:08.257918 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 12 17:26:08.257924 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:26:08.257938 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:26:08.257944 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:26:08.257951 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:26:08.257960 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:26:08.257967 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:26:08.257974 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:26:08.257980 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:26:08.257987 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:26:08.257993 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 12 17:26:08.258000 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 12 17:26:08.258007 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:26:08.258015 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Dec 12 17:26:08.258021 kernel: Zone ranges: Dec 12 17:26:08.258028 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:26:08.258035 kernel: DMA32 empty Dec 12 17:26:08.258041 kernel: Normal empty Dec 12 17:26:08.258050 kernel: Device empty Dec 12 17:26:08.258057 kernel: Movable zone start for each node Dec 12 17:26:08.258063 kernel: Early memory node ranges Dec 12 17:26:08.258070 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Dec 12 17:26:08.258076 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Dec 12 17:26:08.258083 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Dec 12 17:26:08.258092 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Dec 12 17:26:08.258102 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Dec 12 17:26:08.258111 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Dec 12 17:26:08.258127 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Dec 12 17:26:08.258134 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Dec 12 17:26:08.258141 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Dec 12 17:26:08.258147 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 12 17:26:08.258159 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 
12 17:26:08.258166 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 12 17:26:08.258173 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 12 17:26:08.258181 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:26:08.258190 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 12 17:26:08.258198 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Dec 12 17:26:08.258205 kernel: psci: probing for conduit method from ACPI. Dec 12 17:26:08.258212 kernel: psci: PSCIv1.1 detected in firmware. Dec 12 17:26:08.258220 kernel: psci: Using standard PSCI v0.2 function IDs Dec 12 17:26:08.258227 kernel: psci: Trusted OS migration not required Dec 12 17:26:08.258235 kernel: psci: SMC Calling Convention v1.1 Dec 12 17:26:08.258251 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 12 17:26:08.258259 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 12 17:26:08.258267 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 12 17:26:08.258274 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 12 17:26:08.258281 kernel: Detected PIPT I-cache on CPU0 Dec 12 17:26:08.258291 kernel: CPU features: detected: GIC system register CPU interface Dec 12 17:26:08.258302 kernel: CPU features: detected: Spectre-v4 Dec 12 17:26:08.258309 kernel: CPU features: detected: Spectre-BHB Dec 12 17:26:08.258318 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 12 17:26:08.258325 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 12 17:26:08.258332 kernel: CPU features: detected: ARM erratum 1418040 Dec 12 17:26:08.258339 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 12 17:26:08.258346 kernel: alternatives: applying boot alternatives Dec 12 17:26:08.258354 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:26:08.258361 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 17:26:08.258368 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:26:08.258375 kernel: Fallback order for Node 0: 0 Dec 12 17:26:08.258382 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 12 17:26:08.258391 kernel: Policy zone: DMA Dec 12 17:26:08.258398 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 17:26:08.258404 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 12 17:26:08.258411 kernel: software IO TLB: area num 4. Dec 12 17:26:08.258418 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 12 17:26:08.258437 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Dec 12 17:26:08.258444 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 12 17:26:08.258451 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 17:26:08.258458 kernel: rcu: RCU event tracing is enabled. Dec 12 17:26:08.258466 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 12 17:26:08.258474 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 17:26:08.258483 kernel: Tracing variant of Tasks RCU enabled. 
Dec 12 17:26:08.258490 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 17:26:08.258497 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 12 17:26:08.258504 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:26:08.258511 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:26:08.258518 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 12 17:26:08.258525 kernel: GICv3: 256 SPIs implemented Dec 12 17:26:08.258532 kernel: GICv3: 0 Extended SPIs implemented Dec 12 17:26:08.258538 kernel: Root IRQ handler: gic_handle_irq Dec 12 17:26:08.258545 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 12 17:26:08.258552 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 12 17:26:08.258560 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 12 17:26:08.258567 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 12 17:26:08.258574 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 12 17:26:08.258581 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 12 17:26:08.258588 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 12 17:26:08.258595 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 12 17:26:08.258602 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 17:26:08.258609 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:26:08.258616 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 12 17:26:08.258623 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 12 17:26:08.258631 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 12 17:26:08.258639 kernel: arm-pv: using stolen time PV Dec 12 17:26:08.258647 kernel: Console: colour dummy device 80x25 Dec 12 17:26:08.258654 kernel: ACPI: Core revision 20240827 Dec 12 17:26:08.258661 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 12 17:26:08.258669 kernel: pid_max: default: 32768 minimum: 301 Dec 12 17:26:08.258676 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 17:26:08.258684 kernel: landlock: Up and running. Dec 12 17:26:08.258691 kernel: SELinux: Initializing. Dec 12 17:26:08.258699 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:26:08.258707 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:26:08.258714 kernel: rcu: Hierarchical SRCU implementation. Dec 12 17:26:08.258721 kernel: rcu: Max phase no-delay instances is 400. Dec 12 17:26:08.258729 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 17:26:08.258755 kernel: Remapping and enabling EFI services. Dec 12 17:26:08.258762 kernel: smp: Bringing up secondary CPUs ... 
Dec 12 17:26:08.258771 kernel: Detected PIPT I-cache on CPU1 Dec 12 17:26:08.258782 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 12 17:26:08.258791 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 12 17:26:08.258799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:26:08.258806 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 12 17:26:08.258813 kernel: Detected PIPT I-cache on CPU2 Dec 12 17:26:08.258821 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 12 17:26:08.258830 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 12 17:26:08.258838 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:26:08.258845 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 12 17:26:08.258853 kernel: Detected PIPT I-cache on CPU3 Dec 12 17:26:08.258861 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 12 17:26:08.258870 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 12 17:26:08.258877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:26:08.258886 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 12 17:26:08.258893 kernel: smp: Brought up 1 node, 4 CPUs Dec 12 17:26:08.258901 kernel: SMP: Total of 4 processors activated. Dec 12 17:26:08.258908 kernel: CPU: All CPU(s) started at EL1 Dec 12 17:26:08.258916 kernel: CPU features: detected: 32-bit EL0 Support Dec 12 17:26:08.258923 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 12 17:26:08.258931 kernel: CPU features: detected: Common not Private translations Dec 12 17:26:08.258940 kernel: CPU features: detected: CRC32 instructions Dec 12 17:26:08.258947 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 12 17:26:08.258955 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 12 17:26:08.258962 kernel: CPU features: detected: LSE atomic instructions Dec 12 17:26:08.258969 kernel: CPU features: detected: Privileged Access Never Dec 12 17:26:08.258977 kernel: CPU features: detected: RAS Extension Support Dec 12 17:26:08.258984 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 12 17:26:08.258992 kernel: alternatives: applying system-wide alternatives Dec 12 17:26:08.259000 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 12 17:26:08.259008 kernel: Memory: 2450912K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 99040K reserved, 16384K cma-reserved) Dec 12 17:26:08.259016 kernel: devtmpfs: initialized Dec 12 17:26:08.259023 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 17:26:08.259031 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 12 17:26:08.259038 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 12 17:26:08.259046 kernel: 0 pages in range for non-PLT usage Dec 12 17:26:08.259054 kernel: 515184 pages in range for PLT usage Dec 12 17:26:08.259062 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 17:26:08.259069 kernel: SMBIOS 3.0.0 present. 
Dec 12 17:26:08.259076 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 12 17:26:08.259084 kernel: DMI: Memory slots populated: 1/1 Dec 12 17:26:08.259091 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 17:26:08.259099 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 12 17:26:08.259108 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 12 17:26:08.259121 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 12 17:26:08.259130 kernel: audit: initializing netlink subsys (disabled) Dec 12 17:26:08.259138 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Dec 12 17:26:08.259157 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 17:26:08.259166 kernel: cpuidle: using governor menu Dec 12 17:26:08.259174 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 12 17:26:08.259183 kernel: ASID allocator initialised with 32768 entries Dec 12 17:26:08.259191 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 17:26:08.259198 kernel: Serial: AMBA PL011 UART driver Dec 12 17:26:08.259206 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 17:26:08.259214 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 17:26:08.259221 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 12 17:26:08.259229 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 12 17:26:08.259236 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 17:26:08.259250 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 17:26:08.259259 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 12 17:26:08.259266 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 12 17:26:08.259273 kernel: ACPI: Added _OSI(Module Device) Dec 12 17:26:08.259281 kernel: ACPI: Added _OSI(Processor Device) Dec 12 17:26:08.259288 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 17:26:08.259296 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 17:26:08.259305 kernel: ACPI: Interpreter enabled Dec 12 17:26:08.259312 kernel: ACPI: Using GIC for interrupt routing Dec 12 17:26:08.259319 kernel: ACPI: MCFG table detected, 1 entries Dec 12 17:26:08.259327 kernel: ACPI: CPU0 has been hot-added Dec 12 17:26:08.259334 kernel: ACPI: CPU1 has been hot-added Dec 12 17:26:08.259342 kernel: ACPI: CPU2 has been hot-added Dec 12 17:26:08.259349 kernel: ACPI: CPU3 has been hot-added Dec 12 17:26:08.259356 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 12 17:26:08.259365 kernel: printk: legacy console [ttyAMA0] enabled Dec 12 17:26:08.259372 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 17:26:08.259534 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 17:26:08.259625 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 17:26:08.259706 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 17:26:08.259789 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 12 17:26:08.259869 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 12 17:26:08.259878 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 12 17:26:08.259886 
kernel: PCI host bridge to bus 0000:00 Dec 12 17:26:08.259970 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 12 17:26:08.260057 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 12 17:26:08.260171 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 12 17:26:08.260272 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 17:26:08.260373 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 12 17:26:08.260465 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 12 17:26:08.260554 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 12 17:26:08.260637 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 12 17:26:08.260721 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 12 17:26:08.260802 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 12 17:26:08.260884 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 12 17:26:08.260965 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 12 17:26:08.261038 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 12 17:26:08.261110 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:26:08.261198 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 12 17:26:08.261208 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:26:08.261216 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:26:08.261224 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:26:08.261231 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:26:08.261246 kernel: iommu: Default domain type: Translated Dec 12 17:26:08.261261 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:26:08.261269 kernel: efivars: Registered efivars operations Dec 12 17:26:08.261277 kernel: vgaarb: loaded Dec 12 17:26:08.261285 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:26:08.261293 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:26:08.261301 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:26:08.261308 kernel: pnp: PnP ACPI init Dec 12 17:26:08.261403 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 12 17:26:08.261414 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:26:08.261421 kernel: NET: Registered PF_INET protocol family Dec 12 17:26:08.261429 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 17:26:08.261437 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 17:26:08.261444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:26:08.261452 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:26:08.261461 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 17:26:08.261473 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 17:26:08.261480 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:26:08.261488 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:26:08.261496 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:26:08.261503 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:26:08.261511 
kernel: kvm [1]: HYP mode not available Dec 12 17:26:08.261520 kernel: Initialise system trusted keyrings Dec 12 17:26:08.261528 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 17:26:08.261535 kernel: Key type asymmetric registered Dec 12 17:26:08.261543 kernel: Asymmetric key parser 'x509' registered Dec 12 17:26:08.261550 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:26:08.261558 kernel: io scheduler mq-deadline registered Dec 12 17:26:08.261565 kernel: io scheduler kyber registered Dec 12 17:26:08.261574 kernel: io scheduler bfq registered Dec 12 17:26:08.261582 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:26:08.261589 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:26:08.261598 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:26:08.261677 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 12 17:26:08.261687 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:26:08.261695 kernel: thunder_xcv, ver 1.0 Dec 12 17:26:08.261704 kernel: thunder_bgx, ver 1.0 Dec 12 17:26:08.261711 kernel: nicpf, ver 1.0 Dec 12 17:26:08.261719 kernel: nicvf, ver 1.0 Dec 12 17:26:08.261808 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:26:08.261885 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:26:07 UTC (1765560367) Dec 12 17:26:08.261895 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:26:08.261903 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 12 17:26:08.261912 kernel: watchdog: NMI not fully supported Dec 12 17:26:08.261920 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:26:08.261933 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:26:08.261942 kernel: Segment Routing with IPv6 Dec 12 17:26:08.261952 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 17:26:08.261963 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:26:08.261972 kernel: Key type dns_resolver registered Dec 12 17:26:08.261984 kernel: registered taskstats version 1 Dec 12 17:26:08.261991 kernel: Loading compiled-in X.509 certificates Dec 12 17:26:08.262000 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 12 17:26:08.262007 kernel: Demotion targets for Node 0: null Dec 12 17:26:08.262015 kernel: Key type .fscrypt registered Dec 12 17:26:08.262022 kernel: Key type fscrypt-provisioning registered Dec 12 17:26:08.262030 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 12 17:26:08.262039 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:26:08.262046 kernel: ima: No architecture policies found Dec 12 17:26:08.262054 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:26:08.262062 kernel: clk: Disabling unused clocks Dec 12 17:26:08.262070 kernel: PM: genpd: Disabling unused power domains Dec 12 17:26:08.262077 kernel: Freeing unused kernel memory: 12416K Dec 12 17:26:08.262085 kernel: Run /init as init process Dec 12 17:26:08.262093 kernel: with arguments: Dec 12 17:26:08.262101 kernel: /init Dec 12 17:26:08.262108 kernel: with environment: Dec 12 17:26:08.262121 kernel: HOME=/ Dec 12 17:26:08.262130 kernel: TERM=linux Dec 12 17:26:08.262228 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 12 17:26:08.262321 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 12 17:26:08.262335 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:26:08.262343 kernel: GPT:16515071 != 27000831 Dec 12 17:26:08.262351 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:26:08.262358 kernel: GPT:16515071 != 27000831 Dec 12 17:26:08.262366 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:26:08.262373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:26:08.262382 kernel: SCSI subsystem initialized Dec 12 17:26:08.262390 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:26:08.262397 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:26:08.262405 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:26:08.262412 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:26:08.262420 kernel: raid6: neonx8 gen() 15758 MB/s Dec 12 17:26:08.262427 kernel: raid6: neonx4 gen() 15717 MB/s Dec 12 17:26:08.262436 kernel: raid6: neonx2 gen() 13264 MB/s Dec 12 17:26:08.262444 kernel: raid6: neonx1 gen() 10466 MB/s Dec 12 17:26:08.262451 kernel: raid6: int64x8 gen() 6821 MB/s Dec 12 17:26:08.262458 kernel: raid6: int64x4 gen() 7316 MB/s Dec 12 17:26:08.262466 kernel: raid6: int64x2 gen() 6098 MB/s Dec 12 17:26:08.262473 kernel: raid6: int64x1 gen() 5017 MB/s Dec 12 17:26:08.262481 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s Dec 12 17:26:08.262488 kernel: raid6: .... 
xor() 11969 MB/s, rmw enabled Dec 12 17:26:08.262497 kernel: raid6: using neon recovery algorithm Dec 12 17:26:08.262504 kernel: xor: measuring software checksum speed Dec 12 17:26:08.262512 kernel: 8regs : 21601 MB/sec Dec 12 17:26:08.262519 kernel: 32regs : 21653 MB/sec Dec 12 17:26:08.262527 kernel: arm64_neon : 27132 MB/sec Dec 12 17:26:08.262534 kernel: xor: using function: arm64_neon (27132 MB/sec) Dec 12 17:26:08.262542 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:26:08.262551 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (206) Dec 12 17:26:08.262559 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 12 17:26:08.262566 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:08.262574 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:26:08.262582 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:26:08.262589 kernel: loop: module loaded Dec 12 17:26:08.262597 kernel: loop0: detected capacity change from 0 to 91480 Dec 12 17:26:08.262606 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:26:08.262615 systemd[1]: Successfully made /usr/ read-only. Dec 12 17:26:08.262625 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:26:08.262634 systemd[1]: Detected virtualization kvm. Dec 12 17:26:08.262642 systemd[1]: Detected architecture arm64. Dec 12 17:26:08.262650 systemd[1]: Running in initrd. Dec 12 17:26:08.262659 systemd[1]: No hostname configured, using default hostname. Dec 12 17:26:08.262667 systemd[1]: Hostname set to <localhost>. Dec 12 17:26:08.262675 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:26:08.262685 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:26:08.262694 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:26:08.262705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:26:08.262715 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:26:08.262723 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:26:08.262732 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:26:08.262740 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:26:08.262748 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:26:08.262757 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:26:08.262766 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:26:08.262774 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:26:08.262782 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:26:08.262790 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:26:08.262798 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:26:08.262807 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:26:08.262816 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:26:08.262824 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:26:08.262833 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:26:08.262841 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:26:08.262856 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:26:08.262865 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:26:08.262875 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:26:08.262883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:26:08.262892 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:26:08.262901 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:26:08.262909 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:26:08.262917 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:26:08.262927 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:26:08.262936 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:26:08.262944 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:26:08.262952 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:26:08.262960 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:26:08.262972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:26:08.262980 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:26:08.262989 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:26:08.262997 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:26:08.263006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:26:08.263032 systemd-journald[348]: Collecting audit messages is enabled. Dec 12 17:26:08.263052 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:26:08.263060 kernel: Bridge firewalling registered Dec 12 17:26:08.263068 systemd-journald[348]: Journal started Dec 12 17:26:08.263088 systemd-journald[348]: Runtime Journal (/run/log/journal/12de247c0e10428ca5507dec25acdab4) is 6M, max 48.5M, 42.4M free. Dec 12 17:26:08.262640 systemd-modules-load[349]: Inserted module 'br_netfilter' Dec 12 17:26:08.270167 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:26:08.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:08.274294 kernel: audit: type=1130 audit(1765560368.270:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.274326 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:26:08.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.277457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:08.281748 kernel: audit: type=1130 audit(1765560368.274:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.281773 kernel: audit: type=1130 audit(1765560368.278:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.281766 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:26:08.286553 kernel: audit: type=1130 audit(1765560368.282:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.285844 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:26:08.288176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:26:08.290061 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:26:08.302994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:26:08.312825 systemd-tmpfiles[372]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:26:08.314489 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:08.319496 kernel: audit: type=1130 audit(1765560368.316:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.318622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:26:08.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:08.321927 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:26:08.328559 kernel: audit: type=1130 audit(1765560368.321:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.328583 kernel: audit: type=1130 audit(1765560368.325:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.328564 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:26:08.333085 kernel: audit: type=1130 audit(1765560368.329:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.331656 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:26:08.334000 audit: BPF prog-id=6 op=LOAD Dec 12 17:26:08.337130 kernel: audit: type=1334 audit(1765560368.334:10): prog-id=6 op=LOAD Dec 12 17:26:08.335472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:26:08.357379 dracut-cmdline[388]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:26:08.380096 systemd-resolved[389]: Positive Trust Anchors: Dec 12 17:26:08.380136 systemd-resolved[389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:26:08.380140 systemd-resolved[389]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:26:08.380170 systemd-resolved[389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:26:08.402285 systemd-resolved[389]: Defaulting to hostname 'linux'. Dec 12 17:26:08.403314 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:26:08.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:08.404407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:26:08.446143 kernel: Loading iSCSI transport class v2.0-870. Dec 12 17:26:08.454247 kernel: iscsi: registered transport (tcp) Dec 12 17:26:08.467559 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:26:08.467608 kernel: QLogic iSCSI HBA Driver Dec 12 17:26:08.489295 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:26:08.506354 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:26:08.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.507827 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:26:08.556201 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:26:08.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.558212 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 17:26:08.576109 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:26:08.598157 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:26:08.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.600000 audit: BPF prog-id=7 op=LOAD Dec 12 17:26:08.600000 audit: BPF prog-id=8 op=LOAD Dec 12 17:26:08.600804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:26:08.634205 systemd-udevd[627]: Using default interface naming scheme 'v257'. Dec 12 17:26:08.642070 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:26:08.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.646400 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:26:08.667248 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:26:08.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.670000 audit: BPF prog-id=9 op=LOAD Dec 12 17:26:08.671180 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:26:08.673511 dracut-pre-trigger[707]: rd.md=0: removing MD RAID activation Dec 12 17:26:08.698195 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:26:08.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.701377 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 12 17:26:08.717170 systemd-networkd[738]: lo: Link UP Dec 12 17:26:08.717179 systemd-networkd[738]: lo: Gained carrier Dec 12 17:26:08.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.717626 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:26:08.719012 systemd[1]: Reached target network.target - Network. Dec 12 17:26:08.757570 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:26:08.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.761545 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:26:08.804293 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:26:08.811823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:26:08.823909 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:26:08.839415 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:26:08.845747 systemd-networkd[738]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:08.845759 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:26:08.846777 systemd-networkd[738]: eth0: Link UP Dec 12 17:26:08.847214 systemd-networkd[738]: eth0: Gained carrier Dec 12 17:26:08.847224 systemd-networkd[738]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:08.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.849646 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:26:08.851345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:26:08.851460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:08.853076 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:26:08.864866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:26:08.866781 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:26:08.870664 disk-uuid[804]: Primary Header is updated. Dec 12 17:26:08.870664 disk-uuid[804]: Secondary Entries is updated. Dec 12 17:26:08.870664 disk-uuid[804]: Secondary Header is updated. Dec 12 17:26:08.883628 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:26:08.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.886503 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 12 17:26:08.889249 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:26:08.893067 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:26:08.897884 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:26:08.901568 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:08.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.925599 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:26:08.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.905109 disk-uuid[806]: Warning: The kernel is still using the old partition table. Dec 12 17:26:09.905109 disk-uuid[806]: The new table will be used at the next reboot or after you Dec 12 17:26:09.905109 disk-uuid[806]: run partprobe(8) or kpartx(8) Dec 12 17:26:09.905109 disk-uuid[806]: The operation has completed successfully. Dec 12 17:26:09.916162 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:26:09.917193 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:26:09.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.919544 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:26:09.947939 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835) Dec 12 17:26:09.947991 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:09.949159 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:09.951844 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:26:09.951881 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:26:09.957147 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:09.958135 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:26:09.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:09.960083 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 12 17:26:10.059914 ignition[854]: Ignition 2.22.0 Dec 12 17:26:10.059932 ignition[854]: Stage: fetch-offline Dec 12 17:26:10.059969 ignition[854]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:10.059979 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:26:10.060138 ignition[854]: parsed url from cmdline: "" Dec 12 17:26:10.060141 ignition[854]: no config URL provided Dec 12 17:26:10.060146 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:26:10.060155 ignition[854]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:26:10.060191 ignition[854]: op(1): [started] loading QEMU firmware config module Dec 12 17:26:10.060194 ignition[854]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 12 17:26:10.065504 ignition[854]: op(1): [finished] loading QEMU firmware config module Dec 12 17:26:10.109157 ignition[854]: parsing config with SHA512: 91721972d3d9c15934e6ca56cfe178dd066332b6033cbaccec4f036af3643fce673057f2fb55cb07085e1d1d2d8c71645e300693d947816fee3ad82f97e4e195 Dec 12 17:26:10.114140 unknown[854]: fetched base config from "system" Dec 12 17:26:10.114156 unknown[854]: fetched user config from "qemu" Dec 12 17:26:10.114542 ignition[854]: fetch-offline: fetch-offline passed Dec 12 17:26:10.114603 ignition[854]: Ignition finished successfully Dec 12 17:26:10.118051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:26:10.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.119879 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 12 17:26:10.120702 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:26:10.153879 ignition[868]: Ignition 2.22.0 Dec 12 17:26:10.153900 ignition[868]: Stage: kargs Dec 12 17:26:10.154052 ignition[868]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:10.154061 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:26:10.154854 ignition[868]: kargs: kargs passed Dec 12 17:26:10.154895 ignition[868]: Ignition finished successfully Dec 12 17:26:10.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.159226 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:26:10.161610 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 17:26:10.192107 ignition[876]: Ignition 2.22.0 Dec 12 17:26:10.192142 ignition[876]: Stage: disks Dec 12 17:26:10.192288 ignition[876]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:10.192296 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:26:10.193021 ignition[876]: disks: disks passed Dec 12 17:26:10.195733 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:26:10.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.193062 ignition[876]: Ignition finished successfully Dec 12 17:26:10.196934 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 12 17:26:10.199319 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:26:10.201025 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:26:10.202981 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:26:10.205563 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:26:10.208407 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:26:10.248829 systemd-fsck[886]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 12 17:26:10.253091 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:26:10.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.255866 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:26:10.319039 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:26:10.320674 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 12 17:26:10.320330 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:26:10.323103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:26:10.324781 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:26:10.325750 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:26:10.325785 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:26:10.325812 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:26:10.339798 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:26:10.342471 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:26:10.348148 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894) Dec 12 17:26:10.348192 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:10.348204 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:10.352174 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:26:10.352234 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:26:10.353164 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:26:10.382249 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:26:10.386646 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:26:10.390413 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:26:10.393833 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:26:10.468372 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:26:10.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.470639 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Dec 12 17:26:10.472219 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:26:10.497486 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:26:10.499895 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:10.508640 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:26:10.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.539525 ignition[1009]: INFO : Ignition 2.22.0 Dec 12 17:26:10.539525 ignition[1009]: INFO : Stage: mount Dec 12 17:26:10.541043 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:10.541043 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:26:10.541043 ignition[1009]: INFO : mount: mount passed Dec 12 17:26:10.541043 ignition[1009]: INFO : Ignition finished successfully Dec 12 17:26:10.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:10.542615 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:26:10.544934 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:26:10.776269 systemd-networkd[738]: eth0: Gained IPv6LL Dec 12 17:26:11.320634 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:26:11.350137 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1022) Dec 12 17:26:11.350181 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:26:11.352137 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:26:11.354525 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:26:11.354548 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:26:11.355900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:26:11.387817 ignition[1039]: INFO : Ignition 2.22.0 Dec 12 17:26:11.387817 ignition[1039]: INFO : Stage: files Dec 12 17:26:11.389572 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:11.389572 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:26:11.389572 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:26:11.392750 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:26:11.392750 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:26:11.392750 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:26:11.396849 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:26:11.396849 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:26:11.396849 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:26:11.396849 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:26:11.393147 unknown[1039]: wrote ssh authorized keys file for user: core Dec 12 17:26:11.448561 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:26:11.626792 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:26:11.626792 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:26:11.630694 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:26:11.647422 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:26:11.647422 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:26:11.647422 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 12 17:26:12.041839 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:26:12.346400 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:26:12.346400 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:26:12.350101 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:26:12.350101 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:26:12.350101 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:26:12.350101 ignition[1039]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 17:26:12.350101 ignition[1039]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:26:12.350101 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:26:12.350101 ignition[1039]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 17:26:12.350101 ignition[1039]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:26:12.366027 ignition[1039]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:26:12.369802 ignition[1039]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:26:12.372301 ignition[1039]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:26:12.372301 ignition[1039]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:26:12.372301 ignition[1039]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:26:12.372301 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:26:12.372301 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:26:12.372301 ignition[1039]: INFO : files: files passed Dec 12 17:26:12.372301 ignition[1039]: INFO : Ignition finished successfully Dec 12 17:26:12.385749 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 12 17:26:12.385782 kernel: audit: type=1130 audit(1765560372.374:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:12.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.373087 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:26:12.376070 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:26:12.398911 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:26:12.401827 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:26:12.402823 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:26:12.409742 kernel: audit: type=1130 audit(1765560372.403:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.409769 kernel: audit: type=1131 audit(1765560372.403:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.409839 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:26:12.411239 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:26:12.411239 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:26:12.413999 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:26:12.412835 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:26:12.420932 kernel: audit: type=1130 audit(1765560372.415:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.415405 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:26:12.420648 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:26:12.478335 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:26:12.478469 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:26:12.485844 kernel: audit: type=1130 audit(1765560372.480:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:12.485868 kernel: audit: type=1131 audit(1765560372.480:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.480628 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:26:12.486788 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:26:12.488914 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:26:12.489859 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:26:12.522296 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:26:12.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.525258 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:26:12.529015 kernel: audit: type=1130 audit(1765560372.523:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.545681 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:26:12.545895 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:26:12.548716 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:26:12.550562 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:26:12.552318 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:26:12.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.552472 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:26:12.558678 kernel: audit: type=1131 audit(1765560372.554:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.557658 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:26:12.559756 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:26:12.561594 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:26:12.563694 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:26:12.565417 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:26:12.567469 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:26:12.569621 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
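The files stage shown above added SSH keys for the "core" user, fetched the Helm tarball, wrote several manifests plus /etc/flatcar/update.conf, linked kubernetes.raw into /etc/extensions, installed prepare-helm.service, and set presets (prepare-helm.service enabled, coreos-metadata.service disabled). A Butane sketch of a config that would drive that sequence follows; the SSH key, the update.conf contents, and the unit body are placeholders rather than the values actually provisioned here:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - "ssh-ed25519 AAAA... core@example"   # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-arm64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz
        - path: /etc/flatcar/update.conf
          contents:
            inline: REBOOT_STRATEGY=off            # placeholder contents
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack Helm into /opt/bin (placeholder body)
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.3-linux-arm64.tar.gz
            [Install]
            WantedBy=multi-user.target
        - name: coreos-metadata.service
          enabled: false

Transpiled with butane --pretty --strict < config.yaml > config.ign, this would yield the Ignition JSON the VM fetches at first boot.
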
Dec 12 17:26:12.571272 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:26:12.573265 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:26:12.575592 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:26:12.577396 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:26:12.579131 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:26:12.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.579283 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:26:12.585265 kernel: audit: type=1131 audit(1765560372.580:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.584344 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:26:12.586494 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:26:12.588416 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:26:12.589238 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:26:12.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.590456 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:26:12.596446 kernel: audit: type=1131 audit(1765560372.592:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.590598 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:26:12.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.595628 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:26:12.595814 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:26:12.597781 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:26:12.599299 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:26:12.606165 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:26:12.607431 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:26:12.609440 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:26:12.611052 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:26:12.611173 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:26:12.612798 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:26:12.612877 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:26:12.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:12.614423 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 17:26:12.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.614494 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:26:12.616365 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:26:12.616493 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:26:12.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.618464 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:26:12.618581 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:26:12.621276 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:26:12.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.623028 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:26:12.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.623175 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:26:12.626268 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:26:12.627097 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:26:12.627261 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:26:12.629325 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:26:12.629439 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:26:12.631270 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:26:12.631377 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:26:12.637427 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:26:12.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.641745 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:26:12.650969 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 12 17:26:12.663524 ignition[1096]: INFO : Ignition 2.22.0 Dec 12 17:26:12.663524 ignition[1096]: INFO : Stage: umount Dec 12 17:26:12.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.668363 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:26:12.668363 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:26:12.668363 ignition[1096]: INFO : umount: umount passed Dec 12 17:26:12.668363 ignition[1096]: INFO : Ignition finished successfully Dec 12 17:26:12.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.666342 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:26:12.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.666438 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:26:12.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.668544 systemd[1]: Stopped target network.target - Network. Dec 12 17:26:12.671758 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:26:12.671847 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:26:12.673914 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:26:12.673973 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:26:12.675532 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:26:12.675588 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:26:12.677209 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:26:12.677265 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:26:12.679324 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:26:12.681176 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:26:12.693010 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:26:12.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.693155 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:26:12.697681 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:26:12.697792 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:26:12.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.701505 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 12 17:26:12.701596 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:26:12.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.703000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:26:12.704030 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:26:12.706297 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:26:12.706346 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:26:12.708000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:26:12.708278 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:26:12.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.708338 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:26:12.711441 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:26:12.712951 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:26:12.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.713008 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:26:12.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.715084 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:26:12.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.715146 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:12.716987 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:26:12.717033 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:26:12.719317 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:26:12.736032 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:26:12.736230 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:26:12.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.739568 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:26:12.739637 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:26:12.741918 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:26:12.741959 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:26:12.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:12.743899 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:26:12.743952 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:26:12.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.746645 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:26:12.746697 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:26:12.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.749798 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:26:12.749862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:26:12.758809 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:26:12.759926 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:26:12.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.759993 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:26:12.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.762493 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:26:12.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.762539 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:26:12.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.764376 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:26:12.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:12.764433 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:12.767430 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:26:12.767540 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:26:12.768832 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:26:12.768911 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:26:12.771793 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Dec 12 17:26:12.773869 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:26:12.784250 systemd[1]: Switching root. Dec 12 17:26:12.822525 systemd-journald[348]: Journal stopped Dec 12 17:26:13.685441 systemd-journald[348]: Received SIGTERM from PID 1 (systemd). Dec 12 17:26:13.685495 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:26:13.685516 kernel: SELinux: policy capability open_perms=1 Dec 12 17:26:13.685526 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:26:13.685540 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:26:13.685550 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:26:13.685560 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:26:13.685571 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:26:13.685581 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:26:13.685592 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:26:13.685602 systemd[1]: Successfully loaded SELinux policy in 60.750ms. Dec 12 17:26:13.685620 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.025ms. Dec 12 17:26:13.685632 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:26:13.685643 systemd[1]: Detected virtualization kvm. Dec 12 17:26:13.685654 systemd[1]: Detected architecture arm64. Dec 12 17:26:13.685664 systemd[1]: Detected first boot. Dec 12 17:26:13.685676 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:26:13.685686 zram_generator::config[1141]: No configuration found. Dec 12 17:26:13.685698 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:26:13.685708 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:26:13.685721 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:26:13.685731 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:26:13.685743 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:26:13.685754 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:26:13.685765 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:26:13.685775 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:26:13.685786 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:26:13.685796 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:26:13.685808 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:26:13.685826 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:26:13.685837 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:26:13.685848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:26:13.685859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:26:13.685870 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
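After the switch into the real root, the kernel lists the SELinux policy capabilities, systemd 257.9 loads the policy in about 61 ms, virtualization is detected as KVM on arm64, and the machine ID is initialized from the SMBIOS/DMI UUID because this is the first boot. A few read-only checks that would confirm that state on the running node, assuming a standard Flatcar userland:

    cat /sys/fs/selinux/enforce    # 0 = permissive, 1 = enforcing
    systemd-detect-virt            # expected to print "kvm" for this guest
    cat /etc/machine-id            # the ID seeded from DMI on first boot
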
Dec 12 17:26:13.685880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:26:13.685892 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:26:13.685902 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:26:13.685913 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:26:13.685924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:26:13.685934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:26:13.685945 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:26:13.685958 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:26:13.685968 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:26:13.685979 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:26:13.685990 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:26:13.686000 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:26:13.686011 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 17:26:13.686021 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:26:13.686031 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:26:13.686043 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:26:13.686054 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:26:13.686064 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:26:13.686075 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:26:13.686085 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 17:26:13.686096 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:26:13.686106 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 17:26:13.686163 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 17:26:13.686183 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:26:13.686194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:26:13.686206 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:26:13.686222 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:26:13.686234 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:26:13.686244 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:26:13.686257 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:26:13.686268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:26:13.686278 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:26:13.686289 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:26:13.686300 systemd[1]: Reached target machines.target - Containers. 
Dec 12 17:26:13.686310 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:26:13.686323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:13.686334 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:26:13.686344 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:26:13.686355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:26:13.686365 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:26:13.686375 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:13.686386 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:26:13.686398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:26:13.686409 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:26:13.686419 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:26:13.686429 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:26:13.686439 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:26:13.686450 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:26:13.686461 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:13.686473 kernel: fuse: init (API version 7.41) Dec 12 17:26:13.686483 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:26:13.686494 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:26:13.686504 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:26:13.686516 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:26:13.686526 kernel: ACPI: bus type drm_connector registered Dec 12 17:26:13.686537 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:26:13.686547 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:26:13.686557 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:26:13.686600 systemd-journald[1216]: Collecting audit messages is enabled. Dec 12 17:26:13.686633 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:26:13.686644 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:26:13.686656 systemd-journald[1216]: Journal started Dec 12 17:26:13.686676 systemd-journald[1216]: Runtime Journal (/run/log/journal/12de247c0e10428ca5507dec25acdab4) is 6M, max 48.5M, 42.4M free. Dec 12 17:26:13.560000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 17:26:13.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:13.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.652000 audit: BPF prog-id=14 op=UNLOAD Dec 12 17:26:13.652000 audit: BPF prog-id=13 op=UNLOAD Dec 12 17:26:13.653000 audit: BPF prog-id=15 op=LOAD Dec 12 17:26:13.653000 audit: BPF prog-id=16 op=LOAD Dec 12 17:26:13.653000 audit: BPF prog-id=17 op=LOAD Dec 12 17:26:13.684000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 17:26:13.684000 audit[1216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe4264ec0 a2=4000 a3=0 items=0 ppid=1 pid=1216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:13.684000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 17:26:13.461301 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:26:13.486252 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:26:13.486698 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:26:13.689858 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:26:13.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.691005 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:26:13.693306 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:26:13.694473 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:26:13.696307 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:26:13.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.697745 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:26:13.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.699269 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:26:13.699417 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:26:13.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.700968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:26:13.701147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
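The journald lines above size the runtime journal under /run/log/journal at 6M in use with a 48.5M cap, limits journald derives from the size of the backing tmpfs. If different limits were wanted, a drop-in such as this hypothetical override could set them explicitly:

    # /etc/systemd/journald.conf.d/10-size.conf (hypothetical override)
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=160M
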
Dec 12 17:26:13.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.702425 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:26:13.702569 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:26:13.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.703953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:13.704099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:13.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.705952 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:26:13.706145 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:26:13.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.707430 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:13.707578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:13.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.709238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:26:13.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:13.712270 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:26:13.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.714311 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:26:13.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.715946 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:26:13.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.727848 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:26:13.729674 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 17:26:13.730872 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:26:13.730905 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:26:13.732706 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:26:13.734108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:26:13.734242 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:13.735650 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:26:13.737640 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:26:13.738774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:26:13.741292 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:26:13.742326 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:26:13.743144 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:26:13.746670 systemd-journald[1216]: Time spent on flushing to /var/log/journal/12de247c0e10428ca5507dec25acdab4 is 18.868ms for 1002 entries. Dec 12 17:26:13.746670 systemd-journald[1216]: System Journal (/var/log/journal/12de247c0e10428ca5507dec25acdab4) is 8M, max 163.5M, 155.5M free. Dec 12 17:26:13.774326 systemd-journald[1216]: Received client request to flush runtime journal. Dec 12 17:26:13.774380 kernel: loop1: detected capacity change from 0 to 211168 Dec 12 17:26:13.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:13.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.747282 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:26:13.751449 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:26:13.755162 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:26:13.762527 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:26:13.765211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:13.766424 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:26:13.770288 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:26:13.784553 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:26:13.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.788261 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:26:13.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.791000 audit: BPF prog-id=18 op=LOAD Dec 12 17:26:13.791000 audit: BPF prog-id=19 op=LOAD Dec 12 17:26:13.791000 audit: BPF prog-id=20 op=LOAD Dec 12 17:26:13.795159 kernel: loop2: detected capacity change from 0 to 100192 Dec 12 17:26:13.792694 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 17:26:13.795000 audit: BPF prog-id=21 op=LOAD Dec 12 17:26:13.798287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:26:13.802350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:26:13.806000 audit: BPF prog-id=22 op=LOAD Dec 12 17:26:13.806000 audit: BPF prog-id=23 op=LOAD Dec 12 17:26:13.814000 audit: BPF prog-id=24 op=LOAD Dec 12 17:26:13.816000 audit: BPF prog-id=25 op=LOAD Dec 12 17:26:13.814872 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 17:26:13.816000 audit: BPF prog-id=26 op=LOAD Dec 12 17:26:13.816000 audit: BPF prog-id=27 op=LOAD Dec 12 17:26:13.817570 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:26:13.818907 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:26:13.822166 kernel: loop3: detected capacity change from 0 to 109872 Dec 12 17:26:13.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:13.827459 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Dec 12 17:26:13.827478 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Dec 12 17:26:13.831370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:26:13.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.849158 kernel: loop4: detected capacity change from 0 to 211168 Dec 12 17:26:13.853926 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:26:13.854541 systemd-nsresourced[1274]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 17:26:13.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.863139 kernel: loop5: detected capacity change from 0 to 100192 Dec 12 17:26:13.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:13.860241 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 17:26:13.872142 kernel: loop6: detected capacity change from 0 to 109872 Dec 12 17:26:13.877736 (sd-merge)[1280]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 12 17:26:13.881665 (sd-merge)[1280]: Merged extensions into '/usr'. Dec 12 17:26:13.888294 systemd[1]: Reload requested from client PID 1256 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:26:13.888309 systemd[1]: Reloading... Dec 12 17:26:13.906587 systemd-oomd[1270]: No swap; memory pressure usage will be degraded Dec 12 17:26:13.917304 systemd-resolved[1271]: Positive Trust Anchors: Dec 12 17:26:13.917323 systemd-resolved[1271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:26:13.917327 systemd-resolved[1271]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:26:13.917359 systemd-resolved[1271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:26:13.925231 systemd-resolved[1271]: Defaulting to hostname 'linux'. Dec 12 17:26:13.946167 zram_generator::config[1327]: No configuration found. Dec 12 17:26:14.079569 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:26:14.079889 systemd[1]: Reloading finished in 191 ms. Dec 12 17:26:14.096839 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. 
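The sd-merge messages show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr, which is why systemd reloads immediately afterwards; the kubernetes image is the one Ignition linked into /etc/extensions earlier in this log. The merge can be inspected on the running system with:

    systemd-sysext status      # which hierarchies are overlaid and by which images
    systemd-sysext list        # extension images discovered on this node
    ls -l /etc/extensions      # should include the kubernetes.raw symlink written by Ignition
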
Dec 12 17:26:14.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.098257 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:26:14.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.099534 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:26:14.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.103022 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:26:14.114344 systemd[1]: Starting ensure-sysext.service... Dec 12 17:26:14.117000 audit: BPF prog-id=28 op=LOAD Dec 12 17:26:14.117000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:26:14.117000 audit: BPF prog-id=29 op=LOAD Dec 12 17:26:14.117000 audit: BPF prog-id=30 op=LOAD Dec 12 17:26:14.117000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:26:14.117000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:26:14.116158 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:26:14.118000 audit: BPF prog-id=31 op=LOAD Dec 12 17:26:14.118000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:26:14.118000 audit: BPF prog-id=32 op=LOAD Dec 12 17:26:14.118000 audit: BPF prog-id=33 op=LOAD Dec 12 17:26:14.118000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:26:14.118000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:26:14.119000 audit: BPF prog-id=34 op=LOAD Dec 12 17:26:14.119000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:26:14.119000 audit: BPF prog-id=35 op=LOAD Dec 12 17:26:14.119000 audit: BPF prog-id=36 op=LOAD Dec 12 17:26:14.119000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:26:14.119000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:26:14.120000 audit: BPF prog-id=37 op=LOAD Dec 12 17:26:14.120000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:26:14.121000 audit: BPF prog-id=38 op=LOAD Dec 12 17:26:14.121000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:26:14.121000 audit: BPF prog-id=39 op=LOAD Dec 12 17:26:14.121000 audit: BPF prog-id=40 op=LOAD Dec 12 17:26:14.121000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:26:14.121000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:26:14.125187 systemd[1]: Reload requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:26:14.125202 systemd[1]: Reloading... Dec 12 17:26:14.134082 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:26:14.134109 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:26:14.134727 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:26:14.135692 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Dec 12 17:26:14.135735 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Dec 12 17:26:14.153995 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 12 17:26:14.154113 systemd-tmpfiles[1358]: Skipping /boot Dec 12 17:26:14.163226 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:26:14.164153 systemd-tmpfiles[1358]: Skipping /boot Dec 12 17:26:14.177153 zram_generator::config[1390]: No configuration found. Dec 12 17:26:14.320326 systemd[1]: Reloading finished in 194 ms. Dec 12 17:26:14.345915 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:26:14.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.348000 audit: BPF prog-id=41 op=LOAD Dec 12 17:26:14.348000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:26:14.348000 audit: BPF prog-id=42 op=LOAD Dec 12 17:26:14.348000 audit: BPF prog-id=43 op=LOAD Dec 12 17:26:14.348000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:26:14.348000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:26:14.349000 audit: BPF prog-id=44 op=LOAD Dec 12 17:26:14.349000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:26:14.350000 audit: BPF prog-id=45 op=LOAD Dec 12 17:26:14.350000 audit: BPF prog-id=46 op=LOAD Dec 12 17:26:14.350000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:26:14.350000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:26:14.350000 audit: BPF prog-id=47 op=LOAD Dec 12 17:26:14.350000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:26:14.350000 audit: BPF prog-id=48 op=LOAD Dec 12 17:26:14.350000 audit: BPF prog-id=49 op=LOAD Dec 12 17:26:14.350000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:26:14.350000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:26:14.351000 audit: BPF prog-id=50 op=LOAD Dec 12 17:26:14.351000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:26:14.352000 audit: BPF prog-id=51 op=LOAD Dec 12 17:26:14.352000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:26:14.352000 audit: BPF prog-id=52 op=LOAD Dec 12 17:26:14.352000 audit: BPF prog-id=53 op=LOAD Dec 12 17:26:14.352000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:26:14.352000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:26:14.373743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:26:14.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.381618 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:26:14.384098 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:26:14.397552 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:26:14.401000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:26:14.401000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:26:14.400133 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:26:14.402000 audit: BPF prog-id=54 op=LOAD Dec 12 17:26:14.402000 audit: BPF prog-id=55 op=LOAD Dec 12 17:26:14.404343 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:26:14.407111 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:26:14.412283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 12 17:26:14.420000 audit[1435]: SYSTEM_BOOT pid=1435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.417392 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:26:14.421268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:26:14.425083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:14.429683 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:26:14.431948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:26:14.434293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:26:14.434479 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:14.434577 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:14.435925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:14.436113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:14.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.440937 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:26:14.441167 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:26:14.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.446274 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:14.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.446455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:14.455478 systemd-udevd[1431]: Using default interface naming scheme 'v257'. 
Dec 12 17:26:14.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.456688 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:26:14.458944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:26:14.460173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:26:14.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.461730 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:26:14.461901 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:26:14.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.465608 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:26:14.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.474403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:14.476386 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:26:14.478000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:26:14.478000 audit[1465]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcb7c3d70 a2=420 a3=0 items=0 ppid=1426 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:14.478000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:14.478683 augenrules[1465]: No rules Dec 12 17:26:14.479378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:26:14.482389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:14.484877 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:26:14.489459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:26:14.492289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 17:26:14.492472 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:14.492567 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:14.492659 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:26:14.494925 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:26:14.497902 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:26:14.501809 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:26:14.505247 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:26:14.509469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:26:14.510013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:26:14.511974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:14.512613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:14.514912 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:26:14.515088 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:26:14.518753 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:14.521656 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:14.525765 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:26:14.526244 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:26:14.543423 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:26:14.544556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:26:14.546415 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:26:14.553851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:26:14.560533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:26:14.565521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:26:14.570156 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:26:14.572712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:26:14.575450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:26:14.575566 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:26:14.575608 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:26:14.577911 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 12 17:26:14.580198 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:26:14.581045 systemd[1]: Finished ensure-sysext.service. Dec 12 17:26:14.582288 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:26:14.582478 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:26:14.584902 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:26:14.585091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:26:14.588531 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:26:14.588724 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:26:14.592015 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:26:14.592267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:26:14.618749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:26:14.622331 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:26:14.626253 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:26:14.629930 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:26:14.636676 augenrules[1500]: /sbin/augenrules: No change Dec 12 17:26:14.643288 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:26:14.646466 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:26:14.649672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:26:14.650567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:26:14.653544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:26:14.653723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:26:14.655631 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:26:14.658193 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Dec 12 17:26:14.665000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:26:14.665000 audit[1545]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcd417740 a2=420 a3=0 items=0 ppid=1500 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:14.665000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:14.666000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:26:14.666000 audit[1545]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcd419bc0 a2=420 a3=0 items=0 ppid=1500 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:14.666000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:14.666622 augenrules[1545]: No rules Dec 12 17:26:14.668604 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:26:14.668664 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:26:14.670660 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:26:14.670910 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:26:14.682525 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:26:14.685030 systemd-networkd[1512]: lo: Link UP Dec 12 17:26:14.685042 systemd-networkd[1512]: lo: Gained carrier Dec 12 17:26:14.686314 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:26:14.687820 systemd[1]: Reached target network.target - Network. Dec 12 17:26:14.689837 systemd-networkd[1512]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:14.689841 systemd-networkd[1512]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:26:14.690570 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:26:14.692864 systemd-networkd[1512]: eth0: Link UP Dec 12 17:26:14.692993 systemd-networkd[1512]: eth0: Gained carrier Dec 12 17:26:14.693013 systemd-networkd[1512]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:26:14.693372 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:26:14.707188 systemd-networkd[1512]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:26:14.710958 ldconfig[1428]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:26:14.715489 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:26:14.721065 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Dec 12 17:26:14.727159 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:26:14.735960 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:26:14.736906 systemd-timesyncd[1541]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:26:14.737327 systemd-timesyncd[1541]: Initial clock synchronization to Fri 2025-12-12 17:26:14.388965 UTC. Dec 12 17:26:14.737677 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:26:14.739272 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:26:14.740847 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:26:14.743334 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:26:14.744545 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:26:14.746071 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:26:14.747328 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:26:14.748639 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:26:14.751307 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:26:14.752311 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:26:14.753434 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:26:14.753468 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:26:14.754408 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:26:14.756088 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:26:14.758437 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:26:14.764112 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:26:14.765544 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:26:14.767177 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:26:14.771148 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:26:14.772389 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:26:14.775076 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:26:14.776462 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:26:14.777696 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:26:14.778622 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:26:14.778654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:26:14.780343 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:26:14.782966 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:26:14.785746 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:26:14.793550 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Dec 12 17:26:14.795523 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:26:14.796636 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:26:14.798059 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:26:14.800284 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:26:14.804320 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:26:14.804756 jq[1583]: false Dec 12 17:26:14.808261 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:26:14.814413 extend-filesystems[1584]: Found /dev/vda6 Dec 12 17:26:14.812461 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:26:14.817794 extend-filesystems[1584]: Found /dev/vda9 Dec 12 17:26:14.813496 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:26:14.813952 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:26:14.815640 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:26:14.818937 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:26:14.824216 extend-filesystems[1584]: Checking size of /dev/vda9 Dec 12 17:26:14.825500 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:26:14.827095 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:26:14.827389 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:26:14.828431 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:26:14.828649 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:26:14.835073 extend-filesystems[1584]: Resized partition /dev/vda9 Dec 12 17:26:14.843342 extend-filesystems[1607]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:26:14.848791 update_engine[1594]: I20251212 17:26:14.842156 1594 main.cc:92] Flatcar Update Engine starting Dec 12 17:26:14.849135 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 12 17:26:14.849180 jq[1597]: true Dec 12 17:26:14.853544 tar[1604]: linux-arm64/LICENSE Dec 12 17:26:14.853544 tar[1604]: linux-arm64/helm Dec 12 17:26:14.858630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:26:14.868008 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:26:14.874664 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:26:14.886818 dbus-daemon[1581]: [system] SELinux support is enabled Dec 12 17:26:14.887036 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:26:14.893090 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 12 17:26:14.891601 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:26:14.891633 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 12 17:26:14.907496 update_engine[1594]: I20251212 17:26:14.900781 1594 update_check_scheduler.cc:74] Next update check in 7m54s Dec 12 17:26:14.907526 jq[1624]: true Dec 12 17:26:14.894313 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:26:14.894330 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:26:14.903229 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:26:14.908292 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:26:14.909841 extend-filesystems[1607]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:26:14.909841 extend-filesystems[1607]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:26:14.909841 extend-filesystems[1607]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 12 17:26:14.909657 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:26:14.917638 extend-filesystems[1584]: Resized filesystem in /dev/vda9 Dec 12 17:26:14.916556 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:26:14.951829 bash[1654]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:26:14.985069 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:26:14.987220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:26:14.993958 systemd-logind[1592]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:26:14.994178 systemd-logind[1592]: New seat seat0. Dec 12 17:26:14.994919 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:26:14.996066 containerd[1608]: time="2025-12-12T17:26:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:26:14.998136 containerd[1608]: time="2025-12-12T17:26:14.998078960Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:26:15.019538 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 12 17:26:15.026488 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:26:15.030826 containerd[1608]: time="2025-12-12T17:26:15.030776564Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.969µs" Dec 12 17:26:15.030826 containerd[1608]: time="2025-12-12T17:26:15.030812032Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:26:15.030894 containerd[1608]: time="2025-12-12T17:26:15.030862497Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:26:15.030894 containerd[1608]: time="2025-12-12T17:26:15.030875238Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:26:15.031033 containerd[1608]: time="2025-12-12T17:26:15.031011867Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:26:15.031057 containerd[1608]: time="2025-12-12T17:26:15.031032107Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031103 containerd[1608]: time="2025-12-12T17:26:15.031088580Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031138 containerd[1608]: time="2025-12-12T17:26:15.031102316Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031393 containerd[1608]: time="2025-12-12T17:26:15.031370179Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031414 containerd[1608]: time="2025-12-12T17:26:15.031393441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031414 containerd[1608]: time="2025-12-12T17:26:15.031405493Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031444 containerd[1608]: time="2025-12-12T17:26:15.031413413Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031564 containerd[1608]: time="2025-12-12T17:26:15.031545872Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031588 containerd[1608]: time="2025-12-12T17:26:15.031563931Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031642 containerd[1608]: time="2025-12-12T17:26:15.031628591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031808 containerd[1608]: time="2025-12-12T17:26:15.031789937Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031838 containerd[1608]: time="2025-12-12T17:26:15.031825404Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:26:15.031859 containerd[1608]: time="2025-12-12T17:26:15.031838719Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:26:15.031883 containerd[1608]: time="2025-12-12T17:26:15.031872389Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:26:15.035416 containerd[1608]: time="2025-12-12T17:26:15.035364098Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:26:15.035491 containerd[1608]: time="2025-12-12T17:26:15.035473141Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:26:15.038832 containerd[1608]: time="2025-12-12T17:26:15.038798110Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:26:15.038888 containerd[1608]: time="2025-12-12T17:26:15.038848997Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:26:15.038964 containerd[1608]: time="2025-12-12T17:26:15.038924945Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:26:15.038964 containerd[1608]: time="2025-12-12T17:26:15.038962631Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:26:15.039028 containerd[1608]: time="2025-12-12T17:26:15.038977591Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:26:15.039028 containerd[1608]: time="2025-12-12T17:26:15.038990485Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:26:15.039028 containerd[1608]: time="2025-12-12T17:26:15.039001160Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:26:15.039028 containerd[1608]: time="2025-12-12T17:26:15.039010572Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:26:15.039028 containerd[1608]: time="2025-12-12T17:26:15.039021935Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:26:15.039121 containerd[1608]: time="2025-12-12T17:26:15.039034370Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:26:15.039121 containerd[1608]: time="2025-12-12T17:26:15.039045504Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:26:15.039121 containerd[1608]: time="2025-12-12T17:26:15.039057365Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:26:15.039121 containerd[1608]: time="2025-12-12T17:26:15.039066012Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:26:15.039121 containerd[1608]: time="2025-12-12T17:26:15.039076878Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:26:15.039231 containerd[1608]: time="2025-12-12T17:26:15.039211020Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:26:15.039254 containerd[1608]: time="2025-12-12T17:26:15.039238147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:26:15.039270 containerd[1608]: time="2025-12-12T17:26:15.039254254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:26:15.039270 containerd[1608]: time="2025-12-12T17:26:15.039264776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:26:15.039309 containerd[1608]: time="2025-12-12T17:26:15.039274953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:26:15.039309 containerd[1608]: time="2025-12-12T17:26:15.039285399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:26:15.039309 containerd[1608]: time="2025-12-12T17:26:15.039299019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:26:15.039356 containerd[1608]: time="2025-12-12T17:26:15.039308623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:26:15.039356 containerd[1608]: time="2025-12-12T17:26:15.039319795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:26:15.039356 containerd[1608]: time="2025-12-12T17:26:15.039330240Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:26:15.039356 containerd[1608]: time="2025-12-12T17:26:15.039339691Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:26:15.039415 containerd[1608]: time="2025-12-12T17:26:15.039364828Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:26:15.039415 containerd[1608]: time="2025-12-12T17:26:15.039400219Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:26:15.039415 containerd[1608]: time="2025-12-12T17:26:15.039413534Z" level=info msg="Start snapshots syncer" Dec 12 17:26:15.039481 containerd[1608]: time="2025-12-12T17:26:15.039451412Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:26:15.040937 containerd[1608]: time="2025-12-12T17:26:15.040616871Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:26:15.041156 containerd[1608]: time="2025-12-12T17:26:15.041131324Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:26:15.041276 containerd[1608]: time="2025-12-12T17:26:15.041259651Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:26:15.041470 containerd[1608]: time="2025-12-12T17:26:15.041447281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:26:15.041691 containerd[1608]: time="2025-12-12T17:26:15.041670111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:26:15.041756 containerd[1608]: time="2025-12-12T17:26:15.041743763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:26:15.041815 containerd[1608]: time="2025-12-12T17:26:15.041804139Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:26:15.041967 containerd[1608]: time="2025-12-12T17:26:15.041952973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:26:15.042022 containerd[1608]: time="2025-12-12T17:26:15.042011282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:26:15.042069 containerd[1608]: time="2025-12-12T17:26:15.042058266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:26:15.042160 containerd[1608]: time="2025-12-12T17:26:15.042140259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
17:26:15.042214 containerd[1608]: time="2025-12-12T17:26:15.042202279Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:26:15.042306 containerd[1608]: time="2025-12-12T17:26:15.042282015Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:26:15.042486 containerd[1608]: time="2025-12-12T17:26:15.042469263Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:26:15.042539 containerd[1608]: time="2025-12-12T17:26:15.042526960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:26:15.042592 containerd[1608]: time="2025-12-12T17:26:15.042579912Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:26:15.042635 containerd[1608]: time="2025-12-12T17:26:15.042624333Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:26:15.042703 containerd[1608]: time="2025-12-12T17:26:15.042690103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:26:15.042751 containerd[1608]: time="2025-12-12T17:26:15.042740263Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:26:15.042801 containerd[1608]: time="2025-12-12T17:26:15.042790193Z" level=info msg="runtime interface created" Dec 12 17:26:15.042840 containerd[1608]: time="2025-12-12T17:26:15.042830903Z" level=info msg="created NRI interface" Dec 12 17:26:15.042885 containerd[1608]: time="2025-12-12T17:26:15.042873946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:26:15.042943 containerd[1608]: time="2025-12-12T17:26:15.042931452Z" level=info msg="Connect containerd service" Dec 12 17:26:15.043007 containerd[1608]: time="2025-12-12T17:26:15.042994773Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:26:15.043741 containerd[1608]: time="2025-12-12T17:26:15.043709024Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.106877052Z" level=info msg="Start subscribing containerd event" Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.106918680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.106958280Z" level=info msg="Start recovering state" Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.106995010Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.107056725Z" level=info msg="Start event monitor" Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.107070384Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.107086759Z" level=info msg="Start streaming server" Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.107095444Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.107102714Z" level=info msg="runtime interface starting up..." Dec 12 17:26:15.108137 containerd[1608]: time="2025-12-12T17:26:15.107109295Z" level=info msg="starting plugins..." Dec 12 17:26:15.109159 containerd[1608]: time="2025-12-12T17:26:15.107139903Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:26:15.109445 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:26:15.113164 containerd[1608]: time="2025-12-12T17:26:15.111318392Z" level=info msg="containerd successfully booted in 0.115813s" Dec 12 17:26:15.206338 tar[1604]: linux-arm64/README.md Dec 12 17:26:15.223366 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:26:15.919272 sshd_keygen[1603]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:26:15.937659 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:26:15.941378 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:26:15.962191 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:26:15.962473 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:26:15.965060 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:26:15.986745 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:26:15.989902 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:26:15.992595 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:26:15.994427 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:26:16.280240 systemd-networkd[1512]: eth0: Gained IPv6LL Dec 12 17:26:16.282761 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:26:16.284347 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:26:16.286532 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:26:16.288615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:16.290540 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:26:16.313205 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:26:16.314482 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:26:16.314679 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:26:16.316971 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:26:16.825578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:16.827103 systemd[1]: Reached target multi-user.target - Multi-User System. 
Dec 12 17:26:16.829897 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:26:16.831327 systemd[1]: Startup finished in 1.449s (kernel) + 5.023s (initrd) + 3.836s (userspace) = 10.310s. Dec 12 17:26:17.173846 kubelet[1727]: E1212 17:26:17.173790 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:26:17.175845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:26:17.175971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:26:17.176366 systemd[1]: kubelet.service: Consumed 755ms CPU time, 256.8M memory peak. Dec 12 17:26:18.777793 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:26:18.778870 systemd[1]: Started sshd@0-10.0.0.57:22-10.0.0.1:34970.service - OpenSSH per-connection server daemon (10.0.0.1:34970). Dec 12 17:26:18.852399 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 34970 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:26:18.854251 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:18.860358 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:26:18.861237 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:26:18.864755 systemd-logind[1592]: New session 1 of user core. Dec 12 17:26:18.883150 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:26:18.885396 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:26:18.897964 (systemd)[1746]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:26:18.900279 systemd-logind[1592]: New session c1 of user core. Dec 12 17:26:19.008852 systemd[1746]: Queued start job for default target default.target. Dec 12 17:26:19.020214 systemd[1746]: Created slice app.slice - User Application Slice. Dec 12 17:26:19.020248 systemd[1746]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:26:19.020261 systemd[1746]: Reached target paths.target - Paths. Dec 12 17:26:19.020313 systemd[1746]: Reached target timers.target - Timers. Dec 12 17:26:19.021521 systemd[1746]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:26:19.022266 systemd[1746]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 17:26:19.031879 systemd[1746]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:26:19.032611 systemd[1746]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:26:19.032702 systemd[1746]: Reached target sockets.target - Sockets. Dec 12 17:26:19.032739 systemd[1746]: Reached target basic.target - Basic System. Dec 12 17:26:19.032765 systemd[1746]: Reached target default.target - Main User Target. Dec 12 17:26:19.032788 systemd[1746]: Startup finished in 125ms. Dec 12 17:26:19.033055 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:26:19.034742 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 12 17:26:19.060221 systemd[1]: Started sshd@1-10.0.0.57:22-10.0.0.1:34974.service - OpenSSH per-connection server daemon (10.0.0.1:34974). Dec 12 17:26:19.119612 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 34974 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:26:19.120865 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:19.124861 systemd-logind[1592]: New session 2 of user core. Dec 12 17:26:19.135441 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:26:19.145666 sshd[1762]: Connection closed by 10.0.0.1 port 34974 Dec 12 17:26:19.146154 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:19.162333 systemd[1]: sshd@1-10.0.0.57:22-10.0.0.1:34974.service: Deactivated successfully. Dec 12 17:26:19.163947 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:26:19.164720 systemd-logind[1592]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:26:19.167351 systemd[1]: Started sshd@2-10.0.0.57:22-10.0.0.1:34990.service - OpenSSH per-connection server daemon (10.0.0.1:34990). Dec 12 17:26:19.167798 systemd-logind[1592]: Removed session 2. Dec 12 17:26:19.224127 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 34990 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:26:19.226189 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:19.230186 systemd-logind[1592]: New session 3 of user core. Dec 12 17:26:19.246340 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:26:19.253478 sshd[1771]: Connection closed by 10.0.0.1 port 34990 Dec 12 17:26:19.253349 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:19.257155 systemd[1]: sshd@2-10.0.0.57:22-10.0.0.1:34990.service: Deactivated successfully. Dec 12 17:26:19.258757 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:26:19.261240 systemd-logind[1592]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:26:19.263299 systemd[1]: Started sshd@3-10.0.0.57:22-10.0.0.1:35000.service - OpenSSH per-connection server daemon (10.0.0.1:35000). Dec 12 17:26:19.263855 systemd-logind[1592]: Removed session 3. Dec 12 17:26:19.319480 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 35000 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:26:19.320751 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:19.325186 systemd-logind[1592]: New session 4 of user core. Dec 12 17:26:19.337337 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:26:19.347923 sshd[1780]: Connection closed by 10.0.0.1 port 35000 Dec 12 17:26:19.348453 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:19.361247 systemd[1]: sshd@3-10.0.0.57:22-10.0.0.1:35000.service: Deactivated successfully. Dec 12 17:26:19.363942 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:26:19.364800 systemd-logind[1592]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:26:19.367656 systemd[1]: Started sshd@4-10.0.0.57:22-10.0.0.1:35008.service - OpenSSH per-connection server daemon (10.0.0.1:35008). Dec 12 17:26:19.368229 systemd-logind[1592]: Removed session 4. 
Dec 12 17:26:19.426073 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 35008 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:26:19.427196 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:19.431033 systemd-logind[1592]: New session 5 of user core. Dec 12 17:26:19.446319 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:26:19.462127 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:26:19.462404 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:19.484933 sudo[1790]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:19.486578 sshd[1789]: Connection closed by 10.0.0.1 port 35008 Dec 12 17:26:19.486924 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:19.497210 systemd[1]: sshd@4-10.0.0.57:22-10.0.0.1:35008.service: Deactivated successfully. Dec 12 17:26:19.498805 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:26:19.499625 systemd-logind[1592]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:26:19.502078 systemd[1]: Started sshd@5-10.0.0.57:22-10.0.0.1:35024.service - OpenSSH per-connection server daemon (10.0.0.1:35024). Dec 12 17:26:19.503020 systemd-logind[1592]: Removed session 5. Dec 12 17:26:19.556906 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 35024 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:26:19.558227 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:19.562825 systemd-logind[1592]: New session 6 of user core. Dec 12 17:26:19.572313 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:26:19.584729 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:26:19.585295 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:19.590513 sudo[1801]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:19.596484 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:26:19.596733 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:19.605001 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:26:19.644000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:26:19.646363 kernel: kauditd_printk_skb: 181 callbacks suppressed Dec 12 17:26:19.646403 kernel: audit: type=1305 audit(1765560379.644:220): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:26:19.646417 augenrules[1823]: No rules Dec 12 17:26:19.644000 audit[1823]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcf6d6a20 a2=420 a3=0 items=0 ppid=1804 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:19.650228 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:26:19.650486 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
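The sudo commands above first put SELinux into enforcing mode and then remove the shipped audit rule files before restarting audit-rules, which is why augenrules reports "No rules" a moment later. A quick way to verify both effects from a shell with the same privileges as the core user:

    getenforce                    # should print "Enforcing" after the setenforce 1 above
    sudo auditctl -l              # prints "No rules" once the rules.d files were removed and reloaded
    ls /etc/audit/rules.d/        # the 80-selinux / 99-default rule files are gone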
Dec 12 17:26:19.651318 sudo[1800]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:19.652509 kernel: audit: type=1300 audit(1765560379.644:220): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcf6d6a20 a2=420 a3=0 items=0 ppid=1804 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:19.652551 kernel: audit: type=1327 audit(1765560379.644:220): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:19.644000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:26:19.653234 sshd[1799]: Connection closed by 10.0.0.1 port 35024 Dec 12 17:26:19.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.654409 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:19.656855 kernel: audit: type=1130 audit(1765560379.648:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.656896 kernel: audit: type=1131 audit(1765560379.648:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.648000 audit[1800]: USER_END pid=1800 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.661858 kernel: audit: type=1106 audit(1765560379.648:223): pid=1800 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.661884 kernel: audit: type=1104 audit(1765560379.648:224): pid=1800 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.648000 audit[1800]: CRED_DISP pid=1800 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:19.653000 audit[1796]: USER_END pid=1796 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:19.667934 kernel: audit: type=1106 audit(1765560379.653:225): pid=1796 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:19.667960 kernel: audit: type=1104 audit(1765560379.653:226): pid=1796 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:19.653000 audit[1796]: CRED_DISP pid=1796 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:19.679144 systemd[1]: sshd@5-10.0.0.57:22-10.0.0.1:35024.service: Deactivated successfully. Dec 12 17:26:19.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.57:22-10.0.0.1:35024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.681595 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:26:19.682939 systemd-logind[1592]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:26:19.684158 kernel: audit: type=1131 audit(1765560379.678:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.57:22-10.0.0.1:35024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.57:22-10.0.0.1:35038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.684388 systemd[1]: Started sshd@6-10.0.0.57:22-10.0.0.1:35038.service - OpenSSH per-connection server daemon (10.0.0.1:35038). Dec 12 17:26:19.685376 systemd-logind[1592]: Removed session 6. 
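The audit PROCTITLE records above carry the executed command line as hex with NUL-separated arguments; the value logged for the auditctl event decodes to /sbin/auditctl -R /etc/audit/audit.rules. A one-liner for decoding any proctitle field seen in this log:

    # Decode an audit proctitle value: hex -> bytes, then turn the NUL argument
    # separators into spaces so the original argv is readable.
    printf '%s' 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # Output: /sbin/auditctl -R /etc/audit/audit.rules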
Dec 12 17:26:19.735000 audit[1832]: USER_ACCT pid=1832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:19.736880 sshd[1832]: Accepted publickey for core from 10.0.0.1 port 35038 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:26:19.737000 audit[1832]: CRED_ACQ pid=1832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:19.737000 audit[1832]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc51244f0 a2=3 a3=0 items=0 ppid=1 pid=1832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:19.737000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:26:19.738903 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:26:19.748154 systemd-logind[1592]: New session 7 of user core. Dec 12 17:26:19.757319 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:26:19.758000 audit[1832]: USER_START pid=1832 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:19.759000 audit[1835]: CRED_ACQ pid=1835 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:19.767000 audit[1836]: USER_ACCT pid=1836 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.768735 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:26:19.767000 audit[1836]: CRED_REFR pid=1836 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:19.768986 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:26:19.769000 audit[1836]: USER_START pid=1836 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:20.034713 systemd[1]: Starting docker.service - Docker Application Container Engine... 
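Session 7 runs /home/core/install.sh via sudo, and each step is captured as USER_ACCT / CRED_REFR / USER_START audit events like the ones interleaved above. If the audit userspace tools are installed on the node, the same records can be pulled back out of the audit log, for example:

    # Show today's sudo-related audit events in human-readable form.
    sudo ausearch -m USER_ACCT,USER_START,USER_END -c sudo --start today -i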
Dec 12 17:26:20.049454 (dockerd)[1857]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:26:20.254337 dockerd[1857]: time="2025-12-12T17:26:20.254277804Z" level=info msg="Starting up" Dec 12 17:26:20.255839 dockerd[1857]: time="2025-12-12T17:26:20.255804576Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:26:20.266476 dockerd[1857]: time="2025-12-12T17:26:20.266375558Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:26:20.468863 dockerd[1857]: time="2025-12-12T17:26:20.468582074Z" level=info msg="Loading containers: start." Dec 12 17:26:20.478139 kernel: Initializing XFRM netlink socket Dec 12 17:26:20.522000 audit[1913]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.522000 audit[1913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff1093440 a2=0 a3=0 items=0 ppid=1857 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.522000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:26:20.524000 audit[1915]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.524000 audit[1915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff0c72d20 a2=0 a3=0 items=0 ppid=1857 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.524000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:26:20.526000 audit[1917]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.526000 audit[1917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc3e6770 a2=0 a3=0 items=0 ppid=1857 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.526000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:26:20.528000 audit[1919]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.528000 audit[1919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdbc68be0 a2=0 a3=0 items=0 ppid=1857 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.528000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:26:20.529000 audit[1921]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.529000 audit[1921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee908c30 a2=0 a3=0 items=0 ppid=1857 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.529000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:26:20.531000 audit[1923]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.531000 audit[1923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcd7277e0 a2=0 a3=0 items=0 ppid=1857 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.531000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:20.533000 audit[1925]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.533000 audit[1925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd0779f50 a2=0 a3=0 items=0 ppid=1857 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.533000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:26:20.535000 audit[1927]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.535000 audit[1927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc65aaac0 a2=0 a3=0 items=0 ppid=1857 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.535000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:26:20.573000 audit[1930]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.573000 audit[1930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffffd908f80 a2=0 a3=0 items=0 ppid=1857 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.573000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:26:20.575000 audit[1932]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1932 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.575000 audit[1932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe801f890 a2=0 a3=0 items=0 ppid=1857 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.575000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:26:20.577000 audit[1934]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.577000 audit[1934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdd466940 a2=0 a3=0 items=0 ppid=1857 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.577000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:26:20.578000 audit[1936]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.578000 audit[1936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff2b658c0 a2=0 a3=0 items=0 ppid=1857 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.578000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:20.581000 audit[1938]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.581000 audit[1938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffc024180 a2=0 a3=0 items=0 ppid=1857 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.581000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:26:20.611000 audit[1968]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.611000 audit[1968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd17ec9e0 a2=0 a3=0 items=0 ppid=1857 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:26:20.612000 audit[1970]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.612000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd44b3cf0 a2=0 a3=0 items=0 ppid=1857 
pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.612000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:26:20.614000 audit[1972]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.614000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed68f7f0 a2=0 a3=0 items=0 ppid=1857 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.614000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:26:20.616000 audit[1974]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.616000 audit[1974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6773020 a2=0 a3=0 items=0 ppid=1857 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:26:20.618000 audit[1976]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.618000 audit[1976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcf56ee50 a2=0 a3=0 items=0 ppid=1857 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.618000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:26:20.620000 audit[1978]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.620000 audit[1978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe5076100 a2=0 a3=0 items=0 ppid=1857 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.620000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:20.621000 audit[1980]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.621000 audit[1980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffee9ffb60 a2=0 a3=0 items=0 ppid=1857 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:26:20.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:26:20.624000 audit[1982]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.624000 audit[1982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffb96b180 a2=0 a3=0 items=0 ppid=1857 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.624000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:26:20.626000 audit[1984]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.626000 audit[1984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffffa3f100 a2=0 a3=0 items=0 ppid=1857 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.626000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 17:26:20.628000 audit[1986]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.628000 audit[1986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff3865f60 a2=0 a3=0 items=0 ppid=1857 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.628000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:26:20.629000 audit[1988]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.629000 audit[1988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffea919d60 a2=0 a3=0 items=0 ppid=1857 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.629000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:26:20.630000 audit[1990]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.630000 audit[1990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff5553580 a2=0 a3=0 items=0 ppid=1857 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.630000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:26:20.632000 audit[1992]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.632000 audit[1992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd007efc0 a2=0 a3=0 items=0 ppid=1857 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.632000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:26:20.637000 audit[1997]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.637000 audit[1997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffda60cac0 a2=0 a3=0 items=0 ppid=1857 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:26:20.639000 audit[1999]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.639000 audit[1999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff69a82f0 a2=0 a3=0 items=0 ppid=1857 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.639000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:26:20.640000 audit[2001]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.640000 audit[2001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe00435b0 a2=0 a3=0 items=0 ppid=1857 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.640000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:26:20.642000 audit[2003]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.642000 audit[2003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffefa1be60 a2=0 a3=0 items=0 ppid=1857 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.642000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:26:20.644000 audit[2005]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.644000 audit[2005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd65cf080 a2=0 a3=0 items=0 ppid=1857 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.644000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:26:20.646000 audit[2007]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:20.646000 audit[2007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff42e3bf0 a2=0 a3=0 items=0 ppid=1857 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:26:20.660000 audit[2012]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.660000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffee1f72a0 a2=0 a3=0 items=0 ppid=1857 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.660000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:26:20.662000 audit[2015]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.662000 audit[2015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe8c9c450 a2=0 a3=0 items=0 ppid=1857 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.662000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:26:20.669000 audit[2023]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.669000 audit[2023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd2458b70 a2=0 a3=0 items=0 ppid=1857 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.669000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:26:20.676000 audit[2029]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 
17:26:20.676000 audit[2029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff3f27b20 a2=0 a3=0 items=0 ppid=1857 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.676000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:26:20.678000 audit[2031]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.678000 audit[2031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffe02b22b0 a2=0 a3=0 items=0 ppid=1857 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.678000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:26:20.680000 audit[2033]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.680000 audit[2033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffde5c0000 a2=0 a3=0 items=0 ppid=1857 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:26:20.682000 audit[2035]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.682000 audit[2035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc3ead200 a2=0 a3=0 items=0 ppid=1857 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:26:20.684000 audit[2037]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:20.684000 audit[2037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd2cc8ce0 a2=0 a3=0 items=0 ppid=1857 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:20.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:26:20.686268 
systemd-networkd[1512]: docker0: Link UP Dec 12 17:26:20.691046 dockerd[1857]: time="2025-12-12T17:26:20.691003227Z" level=info msg="Loading containers: done." Dec 12 17:26:20.702786 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3804621181-merged.mount: Deactivated successfully. Dec 12 17:26:20.707917 dockerd[1857]: time="2025-12-12T17:26:20.707536833Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:26:20.707917 dockerd[1857]: time="2025-12-12T17:26:20.707622988Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:26:20.707917 dockerd[1857]: time="2025-12-12T17:26:20.707764284Z" level=info msg="Initializing buildkit" Dec 12 17:26:20.730586 dockerd[1857]: time="2025-12-12T17:26:20.730499855Z" level=info msg="Completed buildkit initialization" Dec 12 17:26:20.737830 dockerd[1857]: time="2025-12-12T17:26:20.737791483Z" level=info msg="Daemon has completed initialization" Dec 12 17:26:20.738075 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:26:20.738187 dockerd[1857]: time="2025-12-12T17:26:20.737992381Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:26:20.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:21.237657 containerd[1608]: time="2025-12-12T17:26:21.237618695Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 17:26:22.174843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963997541.mount: Deactivated successfully. 
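The long run of NETFILTER_CFG events above is dockerd building its standard chains (DOCKER, DOCKER-USER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT and the two ISOLATION stages) in both the IPv4 and IPv6 tables, plus the MASQUERADE rule for the default 172.17.0.0/16 bridge subnet, before docker0 comes up. A short sketch for inspecting the result once the daemon reports it is listening on /run/docker.sock:

    # Inspect the chains and rules dockerd just created.
    sudo iptables -t nat -S DOCKER               # per-container DNAT rules land here
    sudo iptables -S DOCKER-USER                 # RETURN-only chain reserved for admin rules
    sudo iptables -S DOCKER-FORWARD              # glue between FORWARD and the bridge/isolation chains
    ip addr show docker0                         # the bridge reported as "Link UP" above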
Dec 12 17:26:22.771547 containerd[1608]: time="2025-12-12T17:26:22.771499777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:22.772183 containerd[1608]: time="2025-12-12T17:26:22.772126922Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Dec 12 17:26:22.773179 containerd[1608]: time="2025-12-12T17:26:22.773149045Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:22.776646 containerd[1608]: time="2025-12-12T17:26:22.776601312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:22.777599 containerd[1608]: time="2025-12-12T17:26:22.777572403Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.539910191s" Dec 12 17:26:22.777638 containerd[1608]: time="2025-12-12T17:26:22.777606844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 12 17:26:22.779124 containerd[1608]: time="2025-12-12T17:26:22.779080759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 17:26:23.836085 containerd[1608]: time="2025-12-12T17:26:23.835587916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:23.836524 containerd[1608]: time="2025-12-12T17:26:23.836481640Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Dec 12 17:26:23.838184 containerd[1608]: time="2025-12-12T17:26:23.837285685Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:23.840415 containerd[1608]: time="2025-12-12T17:26:23.840387360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:23.841454 containerd[1608]: time="2025-12-12T17:26:23.841218868Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.062102353s" Dec 12 17:26:23.841454 containerd[1608]: time="2025-12-12T17:26:23.841249956Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 12 17:26:23.841874 
containerd[1608]: time="2025-12-12T17:26:23.841830629Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 17:26:24.942726 containerd[1608]: time="2025-12-12T17:26:24.942644916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:24.943347 containerd[1608]: time="2025-12-12T17:26:24.943289222Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 12 17:26:24.944509 containerd[1608]: time="2025-12-12T17:26:24.944462006Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:24.947408 containerd[1608]: time="2025-12-12T17:26:24.947365979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:24.948403 containerd[1608]: time="2025-12-12T17:26:24.948363288Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.106493633s" Dec 12 17:26:24.948451 containerd[1608]: time="2025-12-12T17:26:24.948408134Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 12 17:26:24.949070 containerd[1608]: time="2025-12-12T17:26:24.948889675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 17:26:26.018365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637608867.mount: Deactivated successfully. 
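The PullImage / ImageCreate pairs above come from containerd's CRI plugin as the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, then kube-proxy) are fetched. Assuming crictl or ctr is available and configured on the node, the cached images can be listed directly:

    # These images live in containerd's k8s.io namespace, not in Docker's store.
    sudo crictl images                           # CRI view: repo, tag, image id, size
    sudo ctr -n k8s.io images ls -q              # raw containerd view, one reference per line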
Dec 12 17:26:26.252517 containerd[1608]: time="2025-12-12T17:26:26.252470385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:26.253267 containerd[1608]: time="2025-12-12T17:26:26.253217709Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28254952" Dec 12 17:26:26.254253 containerd[1608]: time="2025-12-12T17:26:26.254202908Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:26.255916 containerd[1608]: time="2025-12-12T17:26:26.255866251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:26.256609 containerd[1608]: time="2025-12-12T17:26:26.256580351Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.30766077s" Dec 12 17:26:26.256657 containerd[1608]: time="2025-12-12T17:26:26.256614407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 12 17:26:26.257223 containerd[1608]: time="2025-12-12T17:26:26.257199333Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 17:26:26.861991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1855811323.mount: Deactivated successfully. Dec 12 17:26:27.426371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:26:27.427705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
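The "Scheduled restart job, restart counter is at 1" message means the unit's Restart= policy is rescheduling kubelet after the earlier config.yaml failure, and the counter records how many times that has happened. The effective restart settings can be read back from systemd, for example:

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
    # Restart= and RestartUSec= give the policy; NRestarts matches the counter logged above.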
Dec 12 17:26:27.458687 containerd[1608]: time="2025-12-12T17:26:27.458187496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:27.459383 containerd[1608]: time="2025-12-12T17:26:27.459346694Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Dec 12 17:26:27.460547 containerd[1608]: time="2025-12-12T17:26:27.460518064Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:27.463628 containerd[1608]: time="2025-12-12T17:26:27.463589685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:27.464629 containerd[1608]: time="2025-12-12T17:26:27.464595082Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.207363861s" Dec 12 17:26:27.464808 containerd[1608]: time="2025-12-12T17:26:27.464711017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 12 17:26:27.465192 containerd[1608]: time="2025-12-12T17:26:27.465152160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:26:27.585880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:27.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.590145 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 17:26:27.590245 kernel: audit: type=1130 audit(1765560387.584:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:27.599411 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:26:27.632589 kubelet[2214]: E1212 17:26:27.632531 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:26:27.635944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:26:27.636066 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:26:27.637261 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.3M memory peak. Dec 12 17:26:27.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 12 17:26:27.641179 kernel: audit: type=1131 audit(1765560387.636:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:27.973592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount804821683.mount: Deactivated successfully. Dec 12 17:26:27.978085 containerd[1608]: time="2025-12-12T17:26:27.978033109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:27.979448 containerd[1608]: time="2025-12-12T17:26:27.979391546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:26:27.980373 containerd[1608]: time="2025-12-12T17:26:27.980349363Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:27.982241 containerd[1608]: time="2025-12-12T17:26:27.982192082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:27.983123 containerd[1608]: time="2025-12-12T17:26:27.983075120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 517.89362ms" Dec 12 17:26:27.983157 containerd[1608]: time="2025-12-12T17:26:27.983127735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:26:27.983782 containerd[1608]: time="2025-12-12T17:26:27.983758958Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 17:26:28.596178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114106202.mount: Deactivated successfully. 
Dec 12 17:26:30.545192 containerd[1608]: time="2025-12-12T17:26:30.545110888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:30.545780 containerd[1608]: time="2025-12-12T17:26:30.545726534Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=68134789" Dec 12 17:26:30.546785 containerd[1608]: time="2025-12-12T17:26:30.546745361Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:30.549640 containerd[1608]: time="2025-12-12T17:26:30.549602777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:30.550698 containerd[1608]: time="2025-12-12T17:26:30.550672225Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.566879257s" Dec 12 17:26:30.550759 containerd[1608]: time="2025-12-12T17:26:30.550705589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 12 17:26:36.026273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:36.026848 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.3M memory peak. Dec 12 17:26:36.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:36.029183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:36.031621 kernel: audit: type=1130 audit(1765560396.025:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:36.031702 kernel: audit: type=1131 audit(1765560396.025:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:36.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:36.052887 systemd[1]: Reload requested from client PID 2312 ('systemctl') (unit session-7.scope)... Dec 12 17:26:36.052904 systemd[1]: Reloading... Dec 12 17:26:36.139154 zram_generator::config[2362]: No configuration found. Dec 12 17:26:36.331166 systemd[1]: Reloading finished in 277 ms. 
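The "Reload requested from client PID 2312 ('systemctl') (unit session-7.scope)" entry shows a daemon-reload being issued from the same SSH session that is running install.sh, and the burst of BPF prog LOAD/UNLOAD audit records that follows is systemd re-attaching per-unit cgroup BPF programs during that reload. The script's contents are not in the log; a hypothetical shape of the step it appears to be performing here would be:

    # Hypothetical sketch; install.sh's actual contents are not shown in the log.
    sudo systemctl daemon-reload                 # produces the "Reloading..." entries above
    sudo systemctl restart kubelet               # start kubelet again under the new configuration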
Dec 12 17:26:36.357456 kernel: audit: type=1334 audit(1765560396.355:282): prog-id=61 op=LOAD Dec 12 17:26:36.357540 kernel: audit: type=1334 audit(1765560396.356:283): prog-id=50 op=UNLOAD Dec 12 17:26:36.355000 audit: BPF prog-id=61 op=LOAD Dec 12 17:26:36.356000 audit: BPF prog-id=50 op=UNLOAD Dec 12 17:26:36.360147 kernel: audit: type=1334 audit(1765560396.357:284): prog-id=62 op=LOAD Dec 12 17:26:36.360233 kernel: audit: type=1334 audit(1765560396.357:285): prog-id=56 op=UNLOAD Dec 12 17:26:36.360253 kernel: audit: type=1334 audit(1765560396.357:286): prog-id=63 op=LOAD Dec 12 17:26:36.357000 audit: BPF prog-id=62 op=LOAD Dec 12 17:26:36.357000 audit: BPF prog-id=56 op=UNLOAD Dec 12 17:26:36.357000 audit: BPF prog-id=63 op=LOAD Dec 12 17:26:36.360549 kernel: audit: type=1334 audit(1765560396.357:287): prog-id=51 op=UNLOAD Dec 12 17:26:36.357000 audit: BPF prog-id=51 op=UNLOAD Dec 12 17:26:36.358000 audit: BPF prog-id=64 op=LOAD Dec 12 17:26:36.359000 audit: BPF prog-id=65 op=LOAD Dec 12 17:26:36.362142 kernel: audit: type=1334 audit(1765560396.358:288): prog-id=64 op=LOAD Dec 12 17:26:36.362168 kernel: audit: type=1334 audit(1765560396.359:289): prog-id=65 op=LOAD Dec 12 17:26:36.359000 audit: BPF prog-id=52 op=UNLOAD Dec 12 17:26:36.359000 audit: BPF prog-id=53 op=UNLOAD Dec 12 17:26:36.360000 audit: BPF prog-id=66 op=LOAD Dec 12 17:26:36.360000 audit: BPF prog-id=67 op=LOAD Dec 12 17:26:36.361000 audit: BPF prog-id=54 op=UNLOAD Dec 12 17:26:36.361000 audit: BPF prog-id=55 op=UNLOAD Dec 12 17:26:36.361000 audit: BPF prog-id=68 op=LOAD Dec 12 17:26:36.361000 audit: BPF prog-id=47 op=UNLOAD Dec 12 17:26:36.362000 audit: BPF prog-id=69 op=LOAD Dec 12 17:26:36.362000 audit: BPF prog-id=70 op=LOAD Dec 12 17:26:36.362000 audit: BPF prog-id=48 op=UNLOAD Dec 12 17:26:36.362000 audit: BPF prog-id=49 op=UNLOAD Dec 12 17:26:36.368000 audit: BPF prog-id=71 op=LOAD Dec 12 17:26:36.368000 audit: BPF prog-id=57 op=UNLOAD Dec 12 17:26:36.369000 audit: BPF prog-id=72 op=LOAD Dec 12 17:26:36.369000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:26:36.369000 audit: BPF prog-id=73 op=LOAD Dec 12 17:26:36.369000 audit: BPF prog-id=74 op=LOAD Dec 12 17:26:36.369000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:26:36.369000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:26:36.369000 audit: BPF prog-id=75 op=LOAD Dec 12 17:26:36.369000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:26:36.369000 audit: BPF prog-id=76 op=LOAD Dec 12 17:26:36.369000 audit: BPF prog-id=77 op=LOAD Dec 12 17:26:36.369000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:26:36.369000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:26:36.370000 audit: BPF prog-id=78 op=LOAD Dec 12 17:26:36.370000 audit: BPF prog-id=58 op=UNLOAD Dec 12 17:26:36.370000 audit: BPF prog-id=79 op=LOAD Dec 12 17:26:36.370000 audit: BPF prog-id=80 op=LOAD Dec 12 17:26:36.371000 audit: BPF prog-id=59 op=UNLOAD Dec 12 17:26:36.371000 audit: BPF prog-id=60 op=UNLOAD Dec 12 17:26:36.398753 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:26:36.398845 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:26:36.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:36.399174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:36.399233 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95.1M memory peak. 
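
The burst of type=1334 records above appears to be systemd detaching and re-attaching per-unit BPF programs as part of the reload; each record carries only a prog-id and an op. A small sketch, assuming the raw journal text as input, that tallies LOAD against UNLOAD so the window can be checked for program IDs left behind; the function and key names are illustrative only:

import re
from collections import defaultdict

# Matches the userspace "audit: BPF prog-id=N op=..." records; the kernel's
# type=1334 echoes of the same events lack the "BPF" token, so they are not
# counted twice.
BPF_EVENT = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def bpf_balance(journal_text):
    """Return prog-ids seen only as LOAD or only as UNLOAD in this window."""
    ops = defaultdict(set)
    for prog_id, op in BPF_EVENT.findall(journal_text):
        ops[op].add(int(prog_id))
    return {
        "loaded_only": sorted(ops["LOAD"] - ops["UNLOAD"]),
        "unloaded_only": sorted(ops["UNLOAD"] - ops["LOAD"]),
    }
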
Dec 12 17:26:36.400799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:36.540898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:36.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:36.545162 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:26:36.578749 kubelet[2404]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:36.578749 kubelet[2404]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:26:36.578749 kubelet[2404]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:36.579077 kubelet[2404]: I1212 17:26:36.578790 2404 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:26:37.410722 kubelet[2404]: I1212 17:26:37.410671 2404 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:26:37.410722 kubelet[2404]: I1212 17:26:37.410708 2404 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:26:37.410959 kubelet[2404]: I1212 17:26:37.410931 2404 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:26:37.435423 kubelet[2404]: E1212 17:26:37.434657 2404 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:26:37.436265 kubelet[2404]: I1212 17:26:37.436240 2404 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:26:37.445537 kubelet[2404]: I1212 17:26:37.445509 2404 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:26:37.448136 kubelet[2404]: I1212 17:26:37.447921 2404 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:26:37.449023 kubelet[2404]: I1212 17:26:37.448966 2404 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:26:37.449218 kubelet[2404]: I1212 17:26:37.449013 2404 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:26:37.449364 kubelet[2404]: I1212 17:26:37.449291 2404 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:26:37.449364 kubelet[2404]: I1212 17:26:37.449301 2404 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:26:37.449507 kubelet[2404]: I1212 17:26:37.449492 2404 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:37.452001 kubelet[2404]: I1212 17:26:37.451972 2404 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:26:37.452001 kubelet[2404]: I1212 17:26:37.451997 2404 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:26:37.452082 kubelet[2404]: I1212 17:26:37.452024 2404 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:26:37.453126 kubelet[2404]: I1212 17:26:37.453035 2404 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:26:37.454179 kubelet[2404]: I1212 17:26:37.454157 2404 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:26:37.454933 kubelet[2404]: I1212 17:26:37.454908 2404 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:26:37.455066 kubelet[2404]: W1212 17:26:37.455028 2404 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
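
The container_manager_linux.go:272 entry above embeds the kubelet's effective node configuration as a JSON object after nodeConfig=, including the hard eviction thresholds (memory.available below 100Mi, nodefs.available below 10%, and so on). A minimal sketch, assuming that single journal line as input, that recovers the object and lists the thresholds; it relies on the JSON containing no braces inside string values, which holds for the entry shown:

import json

def node_config(journal_line):
    """Extract the JSON object logged after 'nodeConfig=' on the line above."""
    start = journal_line.index("nodeConfig=") + len("nodeConfig=")
    depth = 0
    for i, ch in enumerate(journal_line[start:], start):
        depth += {"{": 1, "}": -1}.get(ch, 0)
        if depth == 0 and ch == "}":
            return json.loads(journal_line[start:i + 1])
    raise ValueError("unterminated nodeConfig object")

def eviction_thresholds(cfg):
    for t in cfg["HardEvictionThresholds"]:
        value = t["Value"]
        limit = value["Quantity"] or f'{value["Percentage"]:.0%}'
        # e.g. "memory.available LessThan 100Mi", "nodefs.available LessThan 10%"
        yield f'{t["Signal"]} {t["Operator"]} {limit}'
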
Dec 12 17:26:37.455777 kubelet[2404]: E1212 17:26:37.455746 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:26:37.457395 kubelet[2404]: I1212 17:26:37.457375 2404 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:26:37.457461 kubelet[2404]: I1212 17:26:37.457415 2404 server.go:1289] "Started kubelet" Dec 12 17:26:37.457550 kubelet[2404]: I1212 17:26:37.457512 2404 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:26:37.458289 kubelet[2404]: E1212 17:26:37.458257 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:26:37.462830 kubelet[2404]: I1212 17:26:37.462804 2404 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:26:37.464336 kubelet[2404]: I1212 17:26:37.463374 2404 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:26:37.464336 kubelet[2404]: I1212 17:26:37.463863 2404 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:26:37.464336 kubelet[2404]: E1212 17:26:37.462833 2404 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188087d3bff1fbe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:26:37.45739056 +0000 UTC m=+0.908694075,LastTimestamp:2025-12-12 17:26:37.45739056 +0000 UTC m=+0.908694075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:26:37.465525 kubelet[2404]: E1212 17:26:37.465496 2404 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:26:37.466020 kubelet[2404]: I1212 17:26:37.465991 2404 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:26:37.466466 kubelet[2404]: I1212 17:26:37.466432 2404 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:26:37.466507 kubelet[2404]: I1212 17:26:37.466472 2404 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:26:37.466605 kubelet[2404]: E1212 17:26:37.466586 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:26:37.466813 kubelet[2404]: I1212 17:26:37.466798 2404 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:26:37.466866 kubelet[2404]: I1212 17:26:37.466856 2404 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:26:37.468590 kubelet[2404]: E1212 17:26:37.468549 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="200ms" Dec 12 17:26:37.468590 kubelet[2404]: E1212 17:26:37.468576 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:26:37.469969 kubelet[2404]: I1212 17:26:37.469922 2404 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:26:37.471753 kubelet[2404]: I1212 17:26:37.471713 2404 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:26:37.471753 kubelet[2404]: I1212 17:26:37.471752 2404 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:26:37.472000 audit[2422]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:37.472000 audit[2422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffda4e78c0 a2=0 a3=0 items=0 ppid=2404 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:26:37.473000 audit[2423]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:37.473000 audit[2423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed89fa10 a2=0 a3=0 items=0 ppid=2404 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.473000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:26:37.475000 audit[2425]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:37.475000 audit[2425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffda21e460 a2=0 a3=0 items=0 ppid=2404 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:37.477000 audit[2427]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:37.477000 audit[2427]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc6def100 a2=0 a3=0 items=0 ppid=2404 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.477000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:37.484526 kubelet[2404]: I1212 17:26:37.484500 2404 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:26:37.484526 kubelet[2404]: I1212 17:26:37.484517 2404 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:26:37.484617 kubelet[2404]: I1212 17:26:37.484535 2404 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:37.484000 audit[2433]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:37.484000 audit[2433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc0cf2ae0 a2=0 a3=0 items=0 ppid=2404 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 17:26:37.485263 kubelet[2404]: I1212 17:26:37.485231 2404 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 12 17:26:37.485000 audit[2434]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:37.485000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffc892480 a2=0 a3=0 items=0 ppid=2404 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.485000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:26:37.487000 audit[2435]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:37.487000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcdd13e10 a2=0 a3=0 items=0 ppid=2404 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:26:37.488000 audit[2436]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:37.488000 audit[2436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde409e10 a2=0 a3=0 items=0 ppid=2404 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:26:37.489000 audit[2437]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:37.489000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff0f8a3a0 a2=0 a3=0 items=0 ppid=2404 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:26:37.490000 audit[2439]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:37.490000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe798f4d0 a2=0 a3=0 items=0 ppid=2404 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:26:37.491000 audit[2442]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 
17:26:37.491000 audit[2442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda55e540 a2=0 a3=0 items=0 ppid=2404 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.491000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:26:37.492000 audit[2443]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:37.492000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffedad3310 a2=0 a3=0 items=0 ppid=2404 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.492000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:26:37.570434 kubelet[2404]: I1212 17:26:37.486435 2404 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:26:37.570434 kubelet[2404]: I1212 17:26:37.486454 2404 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:26:37.570434 kubelet[2404]: I1212 17:26:37.486471 2404 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:26:37.570434 kubelet[2404]: I1212 17:26:37.486478 2404 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:26:37.570434 kubelet[2404]: E1212 17:26:37.486518 2404 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:26:37.570434 kubelet[2404]: E1212 17:26:37.490194 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:26:37.570434 kubelet[2404]: E1212 17:26:37.566652 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:26:37.570434 kubelet[2404]: I1212 17:26:37.569680 2404 policy_none.go:49] "None policy: Start" Dec 12 17:26:37.570434 kubelet[2404]: I1212 17:26:37.569708 2404 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:26:37.570434 kubelet[2404]: I1212 17:26:37.569722 2404 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:26:37.575577 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:26:37.587288 kubelet[2404]: E1212 17:26:37.587244 2404 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:26:37.593184 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:26:37.596142 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
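
Each NETFILTER_CFG record above is paired with a PROCTITLE record whose proctitle= value is the hex-encoded, NUL-separated argv of the iptables/ip6tables invocation kubelet issued. A short sketch decoding the value from the audit[2422] record above (the mangle-table KUBE-IPTABLES-HINT creation); the helper name is illustrative:

def decode_proctitle(hex_argv: str) -> str:
    """Turn an audit PROCTITLE hex string back into the original command line."""
    return " ".join(part.decode() for part in bytes.fromhex(hex_argv).split(b"\x00"))

print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
))
# -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
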
Dec 12 17:26:37.613990 kubelet[2404]: E1212 17:26:37.613966 2404 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:26:37.614315 kubelet[2404]: I1212 17:26:37.614302 2404 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:26:37.614411 kubelet[2404]: I1212 17:26:37.614381 2404 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:26:37.614701 kubelet[2404]: I1212 17:26:37.614682 2404 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:26:37.616390 kubelet[2404]: E1212 17:26:37.616368 2404 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:26:37.616457 kubelet[2404]: E1212 17:26:37.616407 2404 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:26:37.669550 kubelet[2404]: E1212 17:26:37.669454 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="400ms" Dec 12 17:26:37.716825 kubelet[2404]: I1212 17:26:37.716785 2404 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:26:37.717292 kubelet[2404]: E1212 17:26:37.717241 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Dec 12 17:26:37.795861 systemd[1]: Created slice kubepods-burstable-podd9b86e69fe480a7f0b36f01a7c11e6dc.slice - libcontainer container kubepods-burstable-podd9b86e69fe480a7f0b36f01a7c11e6dc.slice. Dec 12 17:26:37.809155 kubelet[2404]: E1212 17:26:37.809108 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:26:37.813496 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 12 17:26:37.834400 kubelet[2404]: E1212 17:26:37.834366 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:26:37.837227 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
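
The per-pod slices created above embed the pod UID and QoS class directly in the unit name (kubepods-burstable-pod<UID>.slice), and the same UIDs reappear in the host-path volume entries that follow. A small sketch, grounded only in the unit names shown here, that recovers both pieces; treating a missing QoS segment as the guaranteed class is an assumption about names not present in this log:

import re

# Returns None for the QoS parent slices such as kubepods-burstable.slice.
SLICE = re.compile(r"kubepods-(?P<qos>burstable|besteffort)?-?pod(?P<uid>[0-9a-f]+)\.slice")

def pod_from_slice(unit: str):
    m = SLICE.search(unit)
    return (m["qos"] or "guaranteed", m["uid"]) if m else None

print(pod_from_slice("kubepods-burstable-podd9b86e69fe480a7f0b36f01a7c11e6dc.slice"))
# -> ('burstable', 'd9b86e69fe480a7f0b36f01a7c11e6dc')
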
Dec 12 17:26:37.839122 kubelet[2404]: E1212 17:26:37.839096 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:26:37.869365 kubelet[2404]: I1212 17:26:37.869332 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9b86e69fe480a7f0b36f01a7c11e6dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9b86e69fe480a7f0b36f01a7c11e6dc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:37.869435 kubelet[2404]: I1212 17:26:37.869375 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9b86e69fe480a7f0b36f01a7c11e6dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9b86e69fe480a7f0b36f01a7c11e6dc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:37.869435 kubelet[2404]: I1212 17:26:37.869394 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:37.869435 kubelet[2404]: I1212 17:26:37.869409 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:37.869435 kubelet[2404]: I1212 17:26:37.869424 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:37.869534 kubelet[2404]: I1212 17:26:37.869441 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9b86e69fe480a7f0b36f01a7c11e6dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d9b86e69fe480a7f0b36f01a7c11e6dc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:37.869534 kubelet[2404]: I1212 17:26:37.869454 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:37.869534 kubelet[2404]: I1212 17:26:37.869470 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:37.869534 kubelet[2404]: I1212 17:26:37.869493 2404 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:26:37.918875 kubelet[2404]: I1212 17:26:37.918837 2404 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:26:37.919263 kubelet[2404]: E1212 17:26:37.919230 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Dec 12 17:26:38.070565 kubelet[2404]: E1212 17:26:38.070459 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="800ms" Dec 12 17:26:38.109671 kubelet[2404]: E1212 17:26:38.109628 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:38.110303 containerd[1608]: time="2025-12-12T17:26:38.110251032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d9b86e69fe480a7f0b36f01a7c11e6dc,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:38.129887 containerd[1608]: time="2025-12-12T17:26:38.129844012Z" level=info msg="connecting to shim a449c8c89e78d4fa1fbec5f3dd8618d7be7efb26538c8bddbafb63b64f0f9178" address="unix:///run/containerd/s/0a2c93ec0957ddc0ce8dd79d96e0d8e2258a98544092b9d69307b617ed5ee2db" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:38.135522 kubelet[2404]: E1212 17:26:38.135497 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:38.135994 containerd[1608]: time="2025-12-12T17:26:38.135953151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:38.140275 kubelet[2404]: E1212 17:26:38.140251 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:38.140668 containerd[1608]: time="2025-12-12T17:26:38.140638965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:38.155305 systemd[1]: Started cri-containerd-a449c8c89e78d4fa1fbec5f3dd8618d7be7efb26538c8bddbafb63b64f0f9178.scope - libcontainer container a449c8c89e78d4fa1fbec5f3dd8618d7be7efb26538c8bddbafb63b64f0f9178. 
Dec 12 17:26:38.166353 containerd[1608]: time="2025-12-12T17:26:38.166227832Z" level=info msg="connecting to shim 8593fbe05f3784a45ddd41562e2c8abe85212b37d50a88f76871a52775a814d8" address="unix:///run/containerd/s/84d211da1cdfc27b30e450ad91b2516f7cde017b8c2fb26177b1fdc57f9a3b7d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:38.171314 containerd[1608]: time="2025-12-12T17:26:38.171272960Z" level=info msg="connecting to shim b0bbb95ed688eb024f2b3af654994bdecb4ed92cecc9e2c74bf0910da46e4a29" address="unix:///run/containerd/s/0b65b15d0c7767fa4e9786fa0906a5eac02078a699f0c3ddcadde24c86a5e784" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:38.178000 audit: BPF prog-id=81 op=LOAD Dec 12 17:26:38.179000 audit: BPF prog-id=82 op=LOAD Dec 12 17:26:38.179000 audit[2462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343963386338396537386434666131666265633566336464383631 Dec 12 17:26:38.179000 audit: BPF prog-id=82 op=UNLOAD Dec 12 17:26:38.179000 audit[2462]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343963386338396537386434666131666265633566336464383631 Dec 12 17:26:38.179000 audit: BPF prog-id=83 op=LOAD Dec 12 17:26:38.179000 audit[2462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343963386338396537386434666131666265633566336464383631 Dec 12 17:26:38.179000 audit: BPF prog-id=84 op=LOAD Dec 12 17:26:38.179000 audit[2462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343963386338396537386434666131666265633566336464383631 Dec 12 17:26:38.179000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:26:38.179000 audit[2462]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343963386338396537386434666131666265633566336464383631 Dec 12 17:26:38.179000 audit: BPF prog-id=83 op=UNLOAD Dec 12 17:26:38.179000 audit[2462]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343963386338396537386434666131666265633566336464383631 Dec 12 17:26:38.179000 audit: BPF prog-id=85 op=LOAD Dec 12 17:26:38.179000 audit[2462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343963386338396537386434666131666265633566336464383631 Dec 12 17:26:38.196385 systemd[1]: Started cri-containerd-8593fbe05f3784a45ddd41562e2c8abe85212b37d50a88f76871a52775a814d8.scope - libcontainer container 8593fbe05f3784a45ddd41562e2c8abe85212b37d50a88f76871a52775a814d8. Dec 12 17:26:38.200086 systemd[1]: Started cri-containerd-b0bbb95ed688eb024f2b3af654994bdecb4ed92cecc9e2c74bf0910da46e4a29.scope - libcontainer container b0bbb95ed688eb024f2b3af654994bdecb4ed92cecc9e2c74bf0910da46e4a29. 
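
The SYSCALL records that accompany each runc start above identify the call only numerically: arch=c00000b7 with syscall=280 or syscall=57 (and syscall=211 in the earlier iptables records). A lookup sketch; the mapping itself (0xc00000b7 as the audit arch value for little-endian 64-bit AArch64, and 57/211/280 as close/sendmsg/bpf in the generic arm64 syscall table) is stated from memory as an assumption, not taken from the log:

# Hypothetical lookup tables; values assume the asm-generic arm64 numbering.
AUDIT_ARCH = {0xC00000B7: "aarch64 (little-endian, 64-bit)"}
ARM64_SYSCALLS = {57: "close", 211: "sendmsg", 280: "bpf"}

def describe_syscall(arch_hex: str, nr: int) -> str:
    arch = AUDIT_ARCH.get(int(arch_hex, 16), f"unknown arch {arch_hex}")
    return f"{ARM64_SYSCALLS.get(nr, f'syscall {nr}')} on {arch}"

print(describe_syscall("c00000b7", 280))  # bpf on aarch64 (little-endian, 64-bit)
print(describe_syscall("c00000b7", 57))   # close on aarch64 (little-endian, 64-bit)
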
Dec 12 17:26:38.210688 containerd[1608]: time="2025-12-12T17:26:38.210637400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d9b86e69fe480a7f0b36f01a7c11e6dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a449c8c89e78d4fa1fbec5f3dd8618d7be7efb26538c8bddbafb63b64f0f9178\"" Dec 12 17:26:38.209000 audit: BPF prog-id=86 op=LOAD Dec 12 17:26:38.210000 audit: BPF prog-id=87 op=LOAD Dec 12 17:26:38.210000 audit[2513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2484 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835393366626530356633373834613435646464343135363265326338 Dec 12 17:26:38.210000 audit: BPF prog-id=87 op=UNLOAD Dec 12 17:26:38.210000 audit[2513]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835393366626530356633373834613435646464343135363265326338 Dec 12 17:26:38.211000 audit: BPF prog-id=88 op=LOAD Dec 12 17:26:38.211000 audit[2513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2484 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835393366626530356633373834613435646464343135363265326338 Dec 12 17:26:38.211000 audit: BPF prog-id=89 op=LOAD Dec 12 17:26:38.211000 audit[2513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2484 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835393366626530356633373834613435646464343135363265326338 Dec 12 17:26:38.211000 audit: BPF prog-id=89 op=UNLOAD Dec 12 17:26:38.211000 audit[2513]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.211000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835393366626530356633373834613435646464343135363265326338 Dec 12 17:26:38.211000 audit: BPF prog-id=88 op=UNLOAD Dec 12 17:26:38.211000 audit[2513]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835393366626530356633373834613435646464343135363265326338 Dec 12 17:26:38.211000 audit: BPF prog-id=90 op=LOAD Dec 12 17:26:38.211000 audit[2513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2484 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835393366626530356633373834613435646464343135363265326338 Dec 12 17:26:38.211000 audit: BPF prog-id=91 op=LOAD Dec 12 17:26:38.212000 audit: BPF prog-id=92 op=LOAD Dec 12 17:26:38.212000 audit[2528]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2509 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626262393565643638386562303234663262336166363534393934 Dec 12 17:26:38.213000 audit: BPF prog-id=92 op=UNLOAD Dec 12 17:26:38.213000 audit[2528]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626262393565643638386562303234663262336166363534393934 Dec 12 17:26:38.213000 audit: BPF prog-id=93 op=LOAD Dec 12 17:26:38.213000 audit[2528]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2509 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.213000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626262393565643638386562303234663262336166363534393934 Dec 12 17:26:38.213000 audit: BPF prog-id=94 op=LOAD Dec 12 17:26:38.213000 audit[2528]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2509 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626262393565643638386562303234663262336166363534393934 Dec 12 17:26:38.213000 audit: BPF prog-id=94 op=UNLOAD Dec 12 17:26:38.213000 audit[2528]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626262393565643638386562303234663262336166363534393934 Dec 12 17:26:38.213000 audit: BPF prog-id=93 op=UNLOAD Dec 12 17:26:38.213000 audit[2528]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626262393565643638386562303234663262336166363534393934 Dec 12 17:26:38.213000 audit: BPF prog-id=95 op=LOAD Dec 12 17:26:38.213000 audit[2528]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2509 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626262393565643638386562303234663262336166363534393934 Dec 12 17:26:38.215970 kubelet[2404]: E1212 17:26:38.212790 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:38.221607 containerd[1608]: time="2025-12-12T17:26:38.221507321Z" level=info msg="CreateContainer within sandbox \"a449c8c89e78d4fa1fbec5f3dd8618d7be7efb26538c8bddbafb63b64f0f9178\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:26:38.236356 containerd[1608]: 
time="2025-12-12T17:26:38.236309339Z" level=info msg="Container 837fe7c0d35c72a10625736c2ae14f708cece948e82592ca539432a59d33d648: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:38.241007 containerd[1608]: time="2025-12-12T17:26:38.240964934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0bbb95ed688eb024f2b3af654994bdecb4ed92cecc9e2c74bf0910da46e4a29\"" Dec 12 17:26:38.241640 kubelet[2404]: E1212 17:26:38.241616 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:38.249007 containerd[1608]: time="2025-12-12T17:26:38.248947768Z" level=info msg="CreateContainer within sandbox \"b0bbb95ed688eb024f2b3af654994bdecb4ed92cecc9e2c74bf0910da46e4a29\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:26:38.250382 containerd[1608]: time="2025-12-12T17:26:38.250338558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8593fbe05f3784a45ddd41562e2c8abe85212b37d50a88f76871a52775a814d8\"" Dec 12 17:26:38.251050 kubelet[2404]: E1212 17:26:38.251025 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:38.251439 containerd[1608]: time="2025-12-12T17:26:38.251403288Z" level=info msg="CreateContainer within sandbox \"a449c8c89e78d4fa1fbec5f3dd8618d7be7efb26538c8bddbafb63b64f0f9178\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"837fe7c0d35c72a10625736c2ae14f708cece948e82592ca539432a59d33d648\"" Dec 12 17:26:38.251949 containerd[1608]: time="2025-12-12T17:26:38.251924674Z" level=info msg="StartContainer for \"837fe7c0d35c72a10625736c2ae14f708cece948e82592ca539432a59d33d648\"" Dec 12 17:26:38.253076 containerd[1608]: time="2025-12-12T17:26:38.253042776Z" level=info msg="connecting to shim 837fe7c0d35c72a10625736c2ae14f708cece948e82592ca539432a59d33d648" address="unix:///run/containerd/s/0a2c93ec0957ddc0ce8dd79d96e0d8e2258a98544092b9d69307b617ed5ee2db" protocol=ttrpc version=3 Dec 12 17:26:38.257635 containerd[1608]: time="2025-12-12T17:26:38.256748530Z" level=info msg="CreateContainer within sandbox \"8593fbe05f3784a45ddd41562e2c8abe85212b37d50a88f76871a52775a814d8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:26:38.258332 containerd[1608]: time="2025-12-12T17:26:38.258296283Z" level=info msg="Container dc0484a8b7f03d1aa0351c28bec596d82df59d8b1691c2e76708353e14b16073: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:38.263830 containerd[1608]: time="2025-12-12T17:26:38.263777251Z" level=info msg="Container d693829ffc7cd40e5034af4693262548fcef5d10d019bfcfcfc0a14c7399ff46: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:38.272122 containerd[1608]: time="2025-12-12T17:26:38.271990659Z" level=info msg="CreateContainer within sandbox \"b0bbb95ed688eb024f2b3af654994bdecb4ed92cecc9e2c74bf0910da46e4a29\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc0484a8b7f03d1aa0351c28bec596d82df59d8b1691c2e76708353e14b16073\"" Dec 12 17:26:38.272501 containerd[1608]: time="2025-12-12T17:26:38.272471767Z" level=info msg="StartContainer for 
\"dc0484a8b7f03d1aa0351c28bec596d82df59d8b1691c2e76708353e14b16073\"" Dec 12 17:26:38.273323 systemd[1]: Started cri-containerd-837fe7c0d35c72a10625736c2ae14f708cece948e82592ca539432a59d33d648.scope - libcontainer container 837fe7c0d35c72a10625736c2ae14f708cece948e82592ca539432a59d33d648. Dec 12 17:26:38.274525 containerd[1608]: time="2025-12-12T17:26:38.274310293Z" level=info msg="connecting to shim dc0484a8b7f03d1aa0351c28bec596d82df59d8b1691c2e76708353e14b16073" address="unix:///run/containerd/s/0b65b15d0c7767fa4e9786fa0906a5eac02078a699f0c3ddcadde24c86a5e784" protocol=ttrpc version=3 Dec 12 17:26:38.275635 containerd[1608]: time="2025-12-12T17:26:38.275605875Z" level=info msg="CreateContainer within sandbox \"8593fbe05f3784a45ddd41562e2c8abe85212b37d50a88f76871a52775a814d8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d693829ffc7cd40e5034af4693262548fcef5d10d019bfcfcfc0a14c7399ff46\"" Dec 12 17:26:38.276333 containerd[1608]: time="2025-12-12T17:26:38.276309255Z" level=info msg="StartContainer for \"d693829ffc7cd40e5034af4693262548fcef5d10d019bfcfcfc0a14c7399ff46\"" Dec 12 17:26:38.278174 containerd[1608]: time="2025-12-12T17:26:38.278145864Z" level=info msg="connecting to shim d693829ffc7cd40e5034af4693262548fcef5d10d019bfcfcfc0a14c7399ff46" address="unix:///run/containerd/s/84d211da1cdfc27b30e450ad91b2516f7cde017b8c2fb26177b1fdc57f9a3b7d" protocol=ttrpc version=3 Dec 12 17:26:38.297695 kubelet[2404]: E1212 17:26:38.297643 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:26:38.299354 systemd[1]: Started cri-containerd-dc0484a8b7f03d1aa0351c28bec596d82df59d8b1691c2e76708353e14b16073.scope - libcontainer container dc0484a8b7f03d1aa0351c28bec596d82df59d8b1691c2e76708353e14b16073. 
Dec 12 17:26:38.300000 audit: BPF prog-id=96 op=LOAD Dec 12 17:26:38.300000 audit: BPF prog-id=97 op=LOAD Dec 12 17:26:38.300000 audit[2582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2452 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833376665376330643335633732613130363235373336633261653134 Dec 12 17:26:38.301000 audit: BPF prog-id=97 op=UNLOAD Dec 12 17:26:38.301000 audit[2582]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833376665376330643335633732613130363235373336633261653134 Dec 12 17:26:38.301000 audit: BPF prog-id=98 op=LOAD Dec 12 17:26:38.301000 audit[2582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2452 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833376665376330643335633732613130363235373336633261653134 Dec 12 17:26:38.302000 audit: BPF prog-id=99 op=LOAD Dec 12 17:26:38.302000 audit[2582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2452 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833376665376330643335633732613130363235373336633261653134 Dec 12 17:26:38.302000 audit: BPF prog-id=99 op=UNLOAD Dec 12 17:26:38.302000 audit[2582]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833376665376330643335633732613130363235373336633261653134 Dec 12 17:26:38.302000 audit: BPF prog-id=98 op=UNLOAD Dec 12 17:26:38.302000 audit[2582]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833376665376330643335633732613130363235373336633261653134 Dec 12 17:26:38.302000 audit: BPF prog-id=100 op=LOAD Dec 12 17:26:38.302000 audit[2582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2452 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833376665376330643335633732613130363235373336633261653134 Dec 12 17:26:38.304331 systemd[1]: Started cri-containerd-d693829ffc7cd40e5034af4693262548fcef5d10d019bfcfcfc0a14c7399ff46.scope - libcontainer container d693829ffc7cd40e5034af4693262548fcef5d10d019bfcfcfc0a14c7399ff46. Dec 12 17:26:38.315000 audit: BPF prog-id=101 op=LOAD Dec 12 17:26:38.315000 audit: BPF prog-id=102 op=LOAD Dec 12 17:26:38.315000 audit[2600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2509 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303438346138623766303364316161303335316332386265633539 Dec 12 17:26:38.316000 audit: BPF prog-id=102 op=UNLOAD Dec 12 17:26:38.316000 audit[2600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303438346138623766303364316161303335316332386265633539 Dec 12 17:26:38.316000 audit: BPF prog-id=103 op=LOAD Dec 12 17:26:38.316000 audit[2600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2509 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.316000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303438346138623766303364316161303335316332386265633539 Dec 12 17:26:38.317000 audit: BPF prog-id=104 op=LOAD Dec 12 17:26:38.317000 audit[2600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2509 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303438346138623766303364316161303335316332386265633539 Dec 12 17:26:38.318000 audit: BPF prog-id=104 op=UNLOAD Dec 12 17:26:38.318000 audit[2600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303438346138623766303364316161303335316332386265633539 Dec 12 17:26:38.318000 audit: BPF prog-id=103 op=UNLOAD Dec 12 17:26:38.318000 audit[2600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303438346138623766303364316161303335316332386265633539 Dec 12 17:26:38.318000 audit: BPF prog-id=105 op=LOAD Dec 12 17:26:38.318000 audit[2600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2509 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303438346138623766303364316161303335316332386265633539 Dec 12 17:26:38.320829 kubelet[2404]: I1212 17:26:38.320717 2404 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:26:38.321694 kubelet[2404]: E1212 17:26:38.321530 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Dec 12 17:26:38.327000 audit: BPF prog-id=106 op=LOAD Dec 12 17:26:38.327000 audit: BPF prog-id=107 op=LOAD Dec 12 17:26:38.327000 audit[2612]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2484 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393338323966666337636434306535303334616634363933323632 Dec 12 17:26:38.327000 audit: BPF prog-id=107 op=UNLOAD Dec 12 17:26:38.327000 audit[2612]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393338323966666337636434306535303334616634363933323632 Dec 12 17:26:38.328000 audit: BPF prog-id=108 op=LOAD Dec 12 17:26:38.328000 audit[2612]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2484 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393338323966666337636434306535303334616634363933323632 Dec 12 17:26:38.328000 audit: BPF prog-id=109 op=LOAD Dec 12 17:26:38.328000 audit[2612]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2484 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393338323966666337636434306535303334616634363933323632 Dec 12 17:26:38.328000 audit: BPF prog-id=109 op=UNLOAD Dec 12 17:26:38.328000 audit[2612]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393338323966666337636434306535303334616634363933323632 Dec 12 17:26:38.328000 audit: BPF prog-id=108 op=UNLOAD Dec 12 17:26:38.328000 audit[2612]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393338323966666337636434306535303334616634363933323632 Dec 12 17:26:38.328000 audit: BPF prog-id=110 op=LOAD Dec 12 17:26:38.328000 audit[2612]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2484 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:38.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393338323966666337636434306535303334616634363933323632 Dec 12 17:26:38.349245 containerd[1608]: time="2025-12-12T17:26:38.349205515Z" level=info msg="StartContainer for \"837fe7c0d35c72a10625736c2ae14f708cece948e82592ca539432a59d33d648\" returns successfully" Dec 12 17:26:38.360799 containerd[1608]: time="2025-12-12T17:26:38.360669397Z" level=info msg="StartContainer for \"d693829ffc7cd40e5034af4693262548fcef5d10d019bfcfcfc0a14c7399ff46\" returns successfully" Dec 12 17:26:38.364106 containerd[1608]: time="2025-12-12T17:26:38.364002105Z" level=info msg="StartContainer for \"dc0484a8b7f03d1aa0351c28bec596d82df59d8b1691c2e76708353e14b16073\" returns successfully" Dec 12 17:26:38.496839 kubelet[2404]: E1212 17:26:38.496797 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:26:38.496960 kubelet[2404]: E1212 17:26:38.496926 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:38.498669 kubelet[2404]: E1212 17:26:38.498643 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:26:38.498830 kubelet[2404]: E1212 17:26:38.498789 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:38.501945 kubelet[2404]: E1212 17:26:38.501790 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:26:38.501945 kubelet[2404]: E1212 17:26:38.501900 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:39.123566 kubelet[2404]: I1212 17:26:39.123530 2404 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:26:39.504644 kubelet[2404]: E1212 17:26:39.504551 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:26:39.504863 kubelet[2404]: E1212 17:26:39.504848 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:39.505261 kubelet[2404]: E1212 17:26:39.505246 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:26:39.505459 kubelet[2404]: E1212 17:26:39.505446 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:40.504140 kubelet[2404]: E1212 17:26:40.504064 2404 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:26:40.690092 kubelet[2404]: I1212 17:26:40.690054 2404 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:26:40.768397 kubelet[2404]: I1212 17:26:40.767520 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:26:40.774345 kubelet[2404]: E1212 17:26:40.774285 2404 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:26:40.774345 kubelet[2404]: I1212 17:26:40.774321 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:40.776772 kubelet[2404]: E1212 17:26:40.776732 2404 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:40.776772 kubelet[2404]: I1212 17:26:40.776765 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:40.779312 kubelet[2404]: E1212 17:26:40.779251 2404 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:41.456455 kubelet[2404]: I1212 17:26:41.456414 2404 apiserver.go:52] "Watching apiserver" Dec 12 17:26:41.467627 kubelet[2404]: I1212 17:26:41.467588 2404 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:26:42.483366 systemd[1]: Reload requested from client PID 2693 ('systemctl') (unit session-7.scope)... Dec 12 17:26:42.483657 systemd[1]: Reloading... Dec 12 17:26:42.541132 kubelet[2404]: I1212 17:26:42.541065 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:42.552735 kubelet[2404]: E1212 17:26:42.552607 2404 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:42.558150 zram_generator::config[2739]: No configuration found. Dec 12 17:26:42.733138 systemd[1]: Reloading finished in 249 ms. Dec 12 17:26:42.752810 kubelet[2404]: I1212 17:26:42.751447 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:26:42.752953 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:42.764045 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 12 17:26:42.764339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:42.764412 systemd[1]: kubelet.service: Consumed 1.310s CPU time, 129.3M memory peak. Dec 12 17:26:42.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:42.765424 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 17:26:42.765478 kernel: audit: type=1131 audit(1765560402.763:384): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:42.766874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:42.766000 audit: BPF prog-id=111 op=LOAD Dec 12 17:26:42.766000 audit: BPF prog-id=61 op=UNLOAD Dec 12 17:26:42.770874 kernel: audit: type=1334 audit(1765560402.766:385): prog-id=111 op=LOAD Dec 12 17:26:42.770926 kernel: audit: type=1334 audit(1765560402.766:386): prog-id=61 op=UNLOAD Dec 12 17:26:42.770954 kernel: audit: type=1334 audit(1765560402.767:387): prog-id=112 op=LOAD Dec 12 17:26:42.767000 audit: BPF prog-id=112 op=LOAD Dec 12 17:26:42.772466 kernel: audit: type=1334 audit(1765560402.767:388): prog-id=78 op=UNLOAD Dec 12 17:26:42.772517 kernel: audit: type=1334 audit(1765560402.767:389): prog-id=113 op=LOAD Dec 12 17:26:42.772533 kernel: audit: type=1334 audit(1765560402.768:390): prog-id=114 op=LOAD Dec 12 17:26:42.772550 kernel: audit: type=1334 audit(1765560402.768:391): prog-id=79 op=UNLOAD Dec 12 17:26:42.772564 kernel: audit: type=1334 audit(1765560402.768:392): prog-id=80 op=UNLOAD Dec 12 17:26:42.772581 kernel: audit: type=1334 audit(1765560402.769:393): prog-id=115 op=LOAD Dec 12 17:26:42.767000 audit: BPF prog-id=78 op=UNLOAD Dec 12 17:26:42.767000 audit: BPF prog-id=113 op=LOAD Dec 12 17:26:42.768000 audit: BPF prog-id=114 op=LOAD Dec 12 17:26:42.768000 audit: BPF prog-id=79 op=UNLOAD Dec 12 17:26:42.768000 audit: BPF prog-id=80 op=UNLOAD Dec 12 17:26:42.769000 audit: BPF prog-id=115 op=LOAD Dec 12 17:26:42.769000 audit: BPF prog-id=68 op=UNLOAD Dec 12 17:26:42.770000 audit: BPF prog-id=116 op=LOAD Dec 12 17:26:42.771000 audit: BPF prog-id=117 op=LOAD Dec 12 17:26:42.771000 audit: BPF prog-id=69 op=UNLOAD Dec 12 17:26:42.771000 audit: BPF prog-id=70 op=UNLOAD Dec 12 17:26:42.773000 audit: BPF prog-id=118 op=LOAD Dec 12 17:26:42.773000 audit: BPF prog-id=72 op=UNLOAD Dec 12 17:26:42.773000 audit: BPF prog-id=119 op=LOAD Dec 12 17:26:42.774000 audit: BPF prog-id=120 op=LOAD Dec 12 17:26:42.774000 audit: BPF prog-id=73 op=UNLOAD Dec 12 17:26:42.774000 audit: BPF prog-id=74 op=UNLOAD Dec 12 17:26:42.775000 audit: BPF prog-id=121 op=LOAD Dec 12 17:26:42.775000 audit: BPF prog-id=62 op=UNLOAD Dec 12 17:26:42.795000 audit: BPF prog-id=122 op=LOAD Dec 12 17:26:42.795000 audit: BPF prog-id=63 op=UNLOAD Dec 12 17:26:42.795000 audit: BPF prog-id=123 op=LOAD Dec 12 17:26:42.796000 audit: BPF prog-id=124 op=LOAD Dec 12 17:26:42.796000 audit: BPF prog-id=64 op=UNLOAD Dec 12 17:26:42.796000 audit: BPF prog-id=65 op=UNLOAD Dec 12 17:26:42.796000 audit: BPF prog-id=125 op=LOAD Dec 12 17:26:42.796000 audit: BPF prog-id=126 op=LOAD Dec 12 17:26:42.796000 audit: BPF prog-id=66 op=UNLOAD Dec 12 17:26:42.796000 audit: BPF prog-id=67 op=UNLOAD Dec 12 17:26:42.796000 audit: BPF prog-id=127 op=LOAD Dec 12 17:26:42.796000 audit: BPF 
prog-id=71 op=UNLOAD Dec 12 17:26:42.797000 audit: BPF prog-id=128 op=LOAD Dec 12 17:26:42.797000 audit: BPF prog-id=75 op=UNLOAD Dec 12 17:26:42.797000 audit: BPF prog-id=129 op=LOAD Dec 12 17:26:42.797000 audit: BPF prog-id=130 op=LOAD Dec 12 17:26:42.797000 audit: BPF prog-id=76 op=UNLOAD Dec 12 17:26:42.797000 audit: BPF prog-id=77 op=UNLOAD Dec 12 17:26:42.931742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:42.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:42.936194 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:26:42.971797 kubelet[2781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:42.971797 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:26:42.971797 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:42.972251 kubelet[2781]: I1212 17:26:42.971891 2781 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:26:42.978967 kubelet[2781]: I1212 17:26:42.978909 2781 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:26:42.978967 kubelet[2781]: I1212 17:26:42.978938 2781 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:26:42.979196 kubelet[2781]: I1212 17:26:42.979183 2781 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:26:42.980566 kubelet[2781]: I1212 17:26:42.980531 2781 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:26:42.983135 kubelet[2781]: I1212 17:26:42.983096 2781 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:26:42.987150 kubelet[2781]: I1212 17:26:42.987033 2781 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:26:42.990544 kubelet[2781]: I1212 17:26:42.990504 2781 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:26:42.990767 kubelet[2781]: I1212 17:26:42.990719 2781 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:26:42.991202 kubelet[2781]: I1212 17:26:42.990757 2781 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:26:42.991284 kubelet[2781]: I1212 17:26:42.991223 2781 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:26:42.991284 kubelet[2781]: I1212 17:26:42.991238 2781 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:26:42.991284 kubelet[2781]: I1212 17:26:42.991289 2781 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:42.991466 kubelet[2781]: I1212 17:26:42.991456 2781 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:26:42.991493 kubelet[2781]: I1212 17:26:42.991473 2781 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:26:42.991518 kubelet[2781]: I1212 17:26:42.991496 2781 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:26:42.991518 kubelet[2781]: I1212 17:26:42.991509 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:26:42.993459 kubelet[2781]: I1212 17:26:42.993411 2781 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:26:42.994235 kubelet[2781]: I1212 17:26:42.994219 2781 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:26:42.998066 kubelet[2781]: I1212 17:26:42.998047 2781 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:26:42.998224 kubelet[2781]: I1212 17:26:42.998212 2781 server.go:1289] "Started kubelet" Dec 12 17:26:42.999786 kubelet[2781]: I1212 17:26:42.999757 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:26:43.003270 kubelet[2781]: I1212 17:26:43.003143 
2781 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:26:43.006503 kubelet[2781]: E1212 17:26:43.006470 2781 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:26:43.006858 kubelet[2781]: I1212 17:26:43.006845 2781 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:26:43.007182 kubelet[2781]: I1212 17:26:43.007164 2781 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:26:43.007343 kubelet[2781]: I1212 17:26:43.007334 2781 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:26:43.008398 kubelet[2781]: I1212 17:26:43.001260 2781 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:26:43.009252 kubelet[2781]: I1212 17:26:43.001361 2781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:26:43.009435 kubelet[2781]: I1212 17:26:43.009410 2781 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:26:43.009694 kubelet[2781]: I1212 17:26:43.009679 2781 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:26:43.010499 kubelet[2781]: I1212 17:26:43.010463 2781 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:26:43.010609 kubelet[2781]: I1212 17:26:43.010572 2781 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:26:43.012131 kubelet[2781]: E1212 17:26:43.011403 2781 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:26:43.012131 kubelet[2781]: I1212 17:26:43.011515 2781 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:26:43.025974 kubelet[2781]: I1212 17:26:43.025723 2781 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:26:43.028902 kubelet[2781]: I1212 17:26:43.028867 2781 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:26:43.028902 kubelet[2781]: I1212 17:26:43.028902 2781 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:26:43.029038 kubelet[2781]: I1212 17:26:43.028922 2781 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:26:43.029038 kubelet[2781]: I1212 17:26:43.028930 2781 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:26:43.029038 kubelet[2781]: E1212 17:26:43.028975 2781 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:26:43.049651 kubelet[2781]: I1212 17:26:43.049623 2781 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:26:43.049651 kubelet[2781]: I1212 17:26:43.049643 2781 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:26:43.049852 kubelet[2781]: I1212 17:26:43.049668 2781 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:43.049852 kubelet[2781]: I1212 17:26:43.049816 2781 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:26:43.049852 kubelet[2781]: I1212 17:26:43.049827 2781 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:26:43.049852 kubelet[2781]: I1212 17:26:43.049845 2781 policy_none.go:49] "None policy: Start" Dec 12 17:26:43.049852 kubelet[2781]: I1212 17:26:43.049853 2781 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:26:43.050112 kubelet[2781]: I1212 17:26:43.049862 2781 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:26:43.050112 kubelet[2781]: I1212 17:26:43.049947 2781 state_mem.go:75] "Updated machine memory state" Dec 12 17:26:43.053918 kubelet[2781]: E1212 17:26:43.053890 2781 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:26:43.054692 kubelet[2781]: I1212 17:26:43.054071 2781 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:26:43.054692 kubelet[2781]: I1212 17:26:43.054089 2781 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:26:43.054692 kubelet[2781]: I1212 17:26:43.054325 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:26:43.056556 kubelet[2781]: E1212 17:26:43.056519 2781 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:26:43.130528 kubelet[2781]: I1212 17:26:43.130473 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:26:43.130714 kubelet[2781]: I1212 17:26:43.130699 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:43.130842 kubelet[2781]: I1212 17:26:43.130795 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:43.137393 kubelet[2781]: E1212 17:26:43.137361 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:43.137605 kubelet[2781]: E1212 17:26:43.137587 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:26:43.159353 kubelet[2781]: I1212 17:26:43.159323 2781 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:26:43.165607 kubelet[2781]: I1212 17:26:43.165577 2781 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:26:43.165715 kubelet[2781]: I1212 17:26:43.165657 2781 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:26:43.208334 kubelet[2781]: I1212 17:26:43.208250 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:43.309008 kubelet[2781]: I1212 17:26:43.308874 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:43.309008 kubelet[2781]: I1212 17:26:43.308916 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:26:43.309008 kubelet[2781]: I1212 17:26:43.308968 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9b86e69fe480a7f0b36f01a7c11e6dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9b86e69fe480a7f0b36f01a7c11e6dc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:43.309192 kubelet[2781]: I1212 17:26:43.309030 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9b86e69fe480a7f0b36f01a7c11e6dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d9b86e69fe480a7f0b36f01a7c11e6dc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:43.309192 kubelet[2781]: I1212 17:26:43.309074 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:43.309898 kubelet[2781]: I1212 17:26:43.309157 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9b86e69fe480a7f0b36f01a7c11e6dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9b86e69fe480a7f0b36f01a7c11e6dc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:26:43.309898 kubelet[2781]: I1212 17:26:43.309286 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:43.309898 kubelet[2781]: I1212 17:26:43.309308 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:43.437367 kubelet[2781]: E1212 17:26:43.437266 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:43.437692 kubelet[2781]: E1212 17:26:43.437556 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:43.438451 kubelet[2781]: E1212 17:26:43.438429 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:43.992626 kubelet[2781]: I1212 17:26:43.992574 2781 apiserver.go:52] "Watching apiserver" Dec 12 17:26:44.008634 kubelet[2781]: I1212 17:26:44.008594 2781 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:26:44.040616 kubelet[2781]: E1212 17:26:44.040576 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:44.041080 kubelet[2781]: I1212 17:26:44.041063 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:44.041333 kubelet[2781]: I1212 17:26:44.041319 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:26:44.048644 kubelet[2781]: E1212 17:26:44.048379 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:26:44.048644 kubelet[2781]: E1212 17:26:44.048630 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:44.049499 kubelet[2781]: E1212 17:26:44.049466 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:26:44.049631 kubelet[2781]: E1212 17:26:44.049614 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:44.065059 kubelet[2781]: I1212 17:26:44.064975 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.064960124 podStartE2EDuration="2.064960124s" podCreationTimestamp="2025-12-12 17:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:44.062264685 +0000 UTC m=+1.122890255" watchObservedRunningTime="2025-12-12 17:26:44.064960124 +0000 UTC m=+1.125585694" Dec 12 17:26:44.081814 kubelet[2781]: I1212 17:26:44.081028 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.081012107 podStartE2EDuration="2.081012107s" podCreationTimestamp="2025-12-12 17:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:44.07250577 +0000 UTC m=+1.133131380" watchObservedRunningTime="2025-12-12 17:26:44.081012107 +0000 UTC m=+1.141637677" Dec 12 17:26:44.093137 kubelet[2781]: I1212 17:26:44.092525 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.092506937 podStartE2EDuration="1.092506937s" podCreationTimestamp="2025-12-12 17:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:44.080987769 +0000 UTC m=+1.141613339" watchObservedRunningTime="2025-12-12 17:26:44.092506937 +0000 UTC m=+1.153132507" Dec 12 17:26:45.042059 kubelet[2781]: E1212 17:26:45.041979 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:45.043286 kubelet[2781]: E1212 17:26:45.042094 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:45.043286 kubelet[2781]: E1212 17:26:45.042369 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:46.043760 kubelet[2781]: E1212 17:26:46.043708 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:47.795087 kubelet[2781]: I1212 17:26:47.795047 2781 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:26:47.795531 containerd[1608]: time="2025-12-12T17:26:47.795469639Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 12 17:26:47.795697 kubelet[2781]: I1212 17:26:47.795652 2781 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:26:48.906963 systemd[1]: Created slice kubepods-besteffort-pod7d36b916_e7d8_42c5_a3d4_5206bbbbe63b.slice - libcontainer container kubepods-besteffort-pod7d36b916_e7d8_42c5_a3d4_5206bbbbe63b.slice. Dec 12 17:26:48.946517 kubelet[2781]: I1212 17:26:48.946461 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d36b916-e7d8-42c5-a3d4-5206bbbbe63b-lib-modules\") pod \"kube-proxy-7sk76\" (UID: \"7d36b916-e7d8-42c5-a3d4-5206bbbbe63b\") " pod="kube-system/kube-proxy-7sk76" Dec 12 17:26:48.946888 kubelet[2781]: I1212 17:26:48.946537 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9md97\" (UniqueName: \"kubernetes.io/projected/7d36b916-e7d8-42c5-a3d4-5206bbbbe63b-kube-api-access-9md97\") pod \"kube-proxy-7sk76\" (UID: \"7d36b916-e7d8-42c5-a3d4-5206bbbbe63b\") " pod="kube-system/kube-proxy-7sk76" Dec 12 17:26:48.946888 kubelet[2781]: I1212 17:26:48.946560 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d36b916-e7d8-42c5-a3d4-5206bbbbe63b-kube-proxy\") pod \"kube-proxy-7sk76\" (UID: \"7d36b916-e7d8-42c5-a3d4-5206bbbbe63b\") " pod="kube-system/kube-proxy-7sk76" Dec 12 17:26:48.946888 kubelet[2781]: I1212 17:26:48.946578 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d36b916-e7d8-42c5-a3d4-5206bbbbe63b-xtables-lock\") pod \"kube-proxy-7sk76\" (UID: \"7d36b916-e7d8-42c5-a3d4-5206bbbbe63b\") " pod="kube-system/kube-proxy-7sk76" Dec 12 17:26:49.078637 systemd[1]: Created slice kubepods-besteffort-poda02e47cf_1b5a_4000_9117_f754483c1bb4.slice - libcontainer container kubepods-besteffort-poda02e47cf_1b5a_4000_9117_f754483c1bb4.slice. 
Dec 12 17:26:49.149268 kubelet[2781]: I1212 17:26:49.149154 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a02e47cf-1b5a-4000-9117-f754483c1bb4-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hvrhq\" (UID: \"a02e47cf-1b5a-4000-9117-f754483c1bb4\") " pod="tigera-operator/tigera-operator-7dcd859c48-hvrhq" Dec 12 17:26:49.149268 kubelet[2781]: I1212 17:26:49.149208 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtpjr\" (UniqueName: \"kubernetes.io/projected/a02e47cf-1b5a-4000-9117-f754483c1bb4-kube-api-access-xtpjr\") pod \"tigera-operator-7dcd859c48-hvrhq\" (UID: \"a02e47cf-1b5a-4000-9117-f754483c1bb4\") " pod="tigera-operator/tigera-operator-7dcd859c48-hvrhq" Dec 12 17:26:49.222545 kubelet[2781]: E1212 17:26:49.222409 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:49.223052 containerd[1608]: time="2025-12-12T17:26:49.222991412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sk76,Uid:7d36b916-e7d8-42c5-a3d4-5206bbbbe63b,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:49.242961 containerd[1608]: time="2025-12-12T17:26:49.242905159Z" level=info msg="connecting to shim 81f1e81bffb9a51b4578cf6827c4389633444857622aec858e2ce44ce0ed07e6" address="unix:///run/containerd/s/f62e0763b9640b1014969571f76fb0e97fc0a2d94041ea8e5f16df4c7fdac4c1" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:49.266425 systemd[1]: Started cri-containerd-81f1e81bffb9a51b4578cf6827c4389633444857622aec858e2ce44ce0ed07e6.scope - libcontainer container 81f1e81bffb9a51b4578cf6827c4389633444857622aec858e2ce44ce0ed07e6. 
Dec 12 17:26:49.278000 audit: BPF prog-id=131 op=LOAD Dec 12 17:26:49.280536 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:26:49.280596 kernel: audit: type=1334 audit(1765560409.278:426): prog-id=131 op=LOAD Dec 12 17:26:49.280000 audit: BPF prog-id=132 op=LOAD Dec 12 17:26:49.280000 audit[2862]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.286911 kernel: audit: type=1334 audit(1765560409.280:427): prog-id=132 op=LOAD Dec 12 17:26:49.287001 kernel: audit: type=1300 audit(1765560409.280:427): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.287026 kernel: audit: type=1327 audit(1765560409.280:427): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.280000 audit: BPF prog-id=132 op=UNLOAD Dec 12 17:26:49.280000 audit[2862]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.294313 kernel: audit: type=1334 audit(1765560409.280:428): prog-id=132 op=UNLOAD Dec 12 17:26:49.294356 kernel: audit: type=1300 audit(1765560409.280:428): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.294382 kernel: audit: type=1327 audit(1765560409.280:428): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.280000 audit: BPF prog-id=133 op=LOAD Dec 12 17:26:49.298598 kernel: audit: type=1334 audit(1765560409.280:429): prog-id=133 op=LOAD Dec 12 17:26:49.298649 kernel: audit: type=1300 audit(1765560409.280:429): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.280000 audit[2862]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.305937 kernel: audit: type=1327 audit(1765560409.280:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.280000 audit: BPF prog-id=134 op=LOAD Dec 12 17:26:49.280000 audit[2862]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.280000 audit: BPF prog-id=134 op=UNLOAD Dec 12 17:26:49.280000 audit[2862]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.280000 audit: BPF prog-id=133 op=UNLOAD Dec 12 17:26:49.280000 audit[2862]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.280000 audit: BPF prog-id=135 op=LOAD Dec 12 17:26:49.280000 audit[2862]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2850 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.280000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663165383162666662396135316234353738636636383237633433 Dec 12 17:26:49.317070 containerd[1608]: time="2025-12-12T17:26:49.317033334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sk76,Uid:7d36b916-e7d8-42c5-a3d4-5206bbbbe63b,Namespace:kube-system,Attempt:0,} returns sandbox id \"81f1e81bffb9a51b4578cf6827c4389633444857622aec858e2ce44ce0ed07e6\"" Dec 12 17:26:49.318027 kubelet[2781]: E1212 17:26:49.317997 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:49.324865 containerd[1608]: time="2025-12-12T17:26:49.324828378Z" level=info msg="CreateContainer within sandbox \"81f1e81bffb9a51b4578cf6827c4389633444857622aec858e2ce44ce0ed07e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:26:49.335192 containerd[1608]: time="2025-12-12T17:26:49.334629796Z" level=info msg="Container 9600332dc174918e26cc9b19795e3eba3f0d4b8f914e223e4d010fb3fefab3f7: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:49.342374 containerd[1608]: time="2025-12-12T17:26:49.342257442Z" level=info msg="CreateContainer within sandbox \"81f1e81bffb9a51b4578cf6827c4389633444857622aec858e2ce44ce0ed07e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9600332dc174918e26cc9b19795e3eba3f0d4b8f914e223e4d010fb3fefab3f7\"" Dec 12 17:26:49.343089 containerd[1608]: time="2025-12-12T17:26:49.343057103Z" level=info msg="StartContainer for \"9600332dc174918e26cc9b19795e3eba3f0d4b8f914e223e4d010fb3fefab3f7\"" Dec 12 17:26:49.344995 containerd[1608]: time="2025-12-12T17:26:49.344894959Z" level=info msg="connecting to shim 9600332dc174918e26cc9b19795e3eba3f0d4b8f914e223e4d010fb3fefab3f7" address="unix:///run/containerd/s/f62e0763b9640b1014969571f76fb0e97fc0a2d94041ea8e5f16df4c7fdac4c1" protocol=ttrpc version=3 Dec 12 17:26:49.363394 systemd[1]: Started cri-containerd-9600332dc174918e26cc9b19795e3eba3f0d4b8f914e223e4d010fb3fefab3f7.scope - libcontainer container 9600332dc174918e26cc9b19795e3eba3f0d4b8f914e223e4d010fb3fefab3f7. 
Dec 12 17:26:49.382975 containerd[1608]: time="2025-12-12T17:26:49.382912042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hvrhq,Uid:a02e47cf-1b5a-4000-9117-f754483c1bb4,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:26:49.402803 containerd[1608]: time="2025-12-12T17:26:49.402757853Z" level=info msg="connecting to shim fd8215e888ecb01d27f4bb5c120dcae250a8fb518a05c8d711af8111809b736a" address="unix:///run/containerd/s/92e66cfb480291c5c3ba1f299b014bf2544ad5d80744843df2fc592d77d06781" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:49.404000 audit: BPF prog-id=136 op=LOAD Dec 12 17:26:49.404000 audit[2889]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2850 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936303033333264633137343931386532366363396231393739356533 Dec 12 17:26:49.405000 audit: BPF prog-id=137 op=LOAD Dec 12 17:26:49.405000 audit[2889]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2850 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936303033333264633137343931386532366363396231393739356533 Dec 12 17:26:49.405000 audit: BPF prog-id=137 op=UNLOAD Dec 12 17:26:49.405000 audit[2889]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2850 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936303033333264633137343931386532366363396231393739356533 Dec 12 17:26:49.405000 audit: BPF prog-id=136 op=UNLOAD Dec 12 17:26:49.405000 audit[2889]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2850 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936303033333264633137343931386532366363396231393739356533 Dec 12 17:26:49.406000 audit: BPF prog-id=138 op=LOAD Dec 12 17:26:49.406000 audit[2889]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2850 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936303033333264633137343931386532366363396231393739356533 Dec 12 17:26:49.428598 containerd[1608]: time="2025-12-12T17:26:49.428478354Z" level=info msg="StartContainer for \"9600332dc174918e26cc9b19795e3eba3f0d4b8f914e223e4d010fb3fefab3f7\" returns successfully" Dec 12 17:26:49.431562 systemd[1]: Started cri-containerd-fd8215e888ecb01d27f4bb5c120dcae250a8fb518a05c8d711af8111809b736a.scope - libcontainer container fd8215e888ecb01d27f4bb5c120dcae250a8fb518a05c8d711af8111809b736a. Dec 12 17:26:49.443000 audit: BPF prog-id=139 op=LOAD Dec 12 17:26:49.444000 audit: BPF prog-id=140 op=LOAD Dec 12 17:26:49.444000 audit[2928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2917 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664383231356538383865636230316432376634626235633132306463 Dec 12 17:26:49.444000 audit: BPF prog-id=140 op=UNLOAD Dec 12 17:26:49.444000 audit[2928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2917 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664383231356538383865636230316432376634626235633132306463 Dec 12 17:26:49.445000 audit: BPF prog-id=141 op=LOAD Dec 12 17:26:49.445000 audit[2928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2917 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664383231356538383865636230316432376634626235633132306463 Dec 12 17:26:49.445000 audit: BPF prog-id=142 op=LOAD Dec 12 17:26:49.445000 audit[2928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2917 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.445000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664383231356538383865636230316432376634626235633132306463 Dec 12 17:26:49.445000 audit: BPF prog-id=142 op=UNLOAD Dec 12 17:26:49.445000 audit[2928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2917 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664383231356538383865636230316432376634626235633132306463 Dec 12 17:26:49.445000 audit: BPF prog-id=141 op=UNLOAD Dec 12 17:26:49.445000 audit[2928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2917 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664383231356538383865636230316432376634626235633132306463 Dec 12 17:26:49.445000 audit: BPF prog-id=143 op=LOAD Dec 12 17:26:49.445000 audit[2928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2917 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664383231356538383865636230316432376634626235633132306463 Dec 12 17:26:49.479515 containerd[1608]: time="2025-12-12T17:26:49.479396796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hvrhq,Uid:a02e47cf-1b5a-4000-9117-f754483c1bb4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fd8215e888ecb01d27f4bb5c120dcae250a8fb518a05c8d711af8111809b736a\"" Dec 12 17:26:49.481393 containerd[1608]: time="2025-12-12T17:26:49.481354960Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:26:49.583000 audit[2999]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.583000 audit[2999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd549b340 a2=0 a3=1 items=0 ppid=2901 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:26:49.583000 audit[3000]: NETFILTER_CFG table=mangle:55 family=2 entries=1 
op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.583000 audit[3000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7bc7de0 a2=0 a3=1 items=0 ppid=2901 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:26:49.585000 audit[3002]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.585000 audit[3002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd3a8830 a2=0 a3=1 items=0 ppid=2901 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:26:49.585000 audit[3003]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.585000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf4d55b0 a2=0 a3=1 items=0 ppid=2901 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:26:49.587000 audit[3005]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.587000 audit[3005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd822de80 a2=0 a3=1 items=0 ppid=2901 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:26:49.588000 audit[3006]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.588000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcbcf57f0 a2=0 a3=1 items=0 ppid=2901 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:26:49.688000 audit[3008]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.688000 audit[3008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcb743770 a2=0 a3=1 items=0 
ppid=2901 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.688000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:26:49.691000 audit[3010]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.691000 audit[3010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffe3dccd0 a2=0 a3=1 items=0 ppid=2901 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.691000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 17:26:49.695000 audit[3013]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.695000 audit[3013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd5e981b0 a2=0 a3=1 items=0 ppid=2901 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 17:26:49.696000 audit[3014]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.696000 audit[3014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe24ed780 a2=0 a3=1 items=0 ppid=2901 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:26:49.699000 audit[3016]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.699000 audit[3016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe81e2da0 a2=0 a3=1 items=0 ppid=2901 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:26:49.700000 audit[3017]: NETFILTER_CFG table=filter:65 family=2 
entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.700000 audit[3017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9f19b50 a2=0 a3=1 items=0 ppid=2901 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.700000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:26:49.703000 audit[3019]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.703000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd0f19d30 a2=0 a3=1 items=0 ppid=2901 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:26:49.707000 audit[3022]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.707000 audit[3022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeffab590 a2=0 a3=1 items=0 ppid=2901 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 17:26:49.709000 audit[3023]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.709000 audit[3023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0804a20 a2=0 a3=1 items=0 ppid=2901 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:26:49.712000 audit[3025]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.712000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff8d92450 a2=0 a3=1 items=0 ppid=2901 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.712000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:26:49.713000 audit[3026]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.713000 audit[3026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffed3f78f0 a2=0 a3=1 items=0 ppid=2901 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:26:49.716000 audit[3028]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.716000 audit[3028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe30244f0 a2=0 a3=1 items=0 ppid=2901 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:26:49.721000 audit[3031]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.721000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd31d1710 a2=0 a3=1 items=0 ppid=2901 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:26:49.725000 audit[3034]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.725000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff4fce250 a2=0 a3=1 items=0 ppid=2901 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:26:49.726000 audit[3035]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.726000 audit[3035]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe769aa40 a2=0 a3=1 items=0 ppid=2901 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:26:49.729000 audit[3037]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.729000 audit[3037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd9f45d20 a2=0 a3=1 items=0 ppid=2901 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:49.733000 audit[3040]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.733000 audit[3040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd28a9020 a2=0 a3=1 items=0 ppid=2901 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:49.734000 audit[3041]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.734000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4251120 a2=0 a3=1 items=0 ppid=2901 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:26:49.737000 audit[3043]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:49.737000 audit[3043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff44d80f0 a2=0 a3=1 items=0 ppid=2901 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:26:49.757000 audit[3049]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3049 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:49.757000 audit[3049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffff15f490 a2=0 a3=1 items=0 ppid=2901 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:49.770000 audit[3049]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:49.770000 audit[3049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffff15f490 a2=0 a3=1 items=0 ppid=2901 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:49.772000 audit[3054]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.772000 audit[3054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdcc2cec0 a2=0 a3=1 items=0 ppid=2901 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:26:49.775000 audit[3056]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.775000 audit[3056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe145f820 a2=0 a3=1 items=0 ppid=2901 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 17:26:49.779000 audit[3059]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.779000 audit[3059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffffdeebe0 a2=0 a3=1 items=0 ppid=2901 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.779000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 17:26:49.780000 audit[3060]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.780000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3e46530 a2=0 a3=1 items=0 ppid=2901 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:26:49.783000 audit[3062]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.783000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe76dc270 a2=0 a3=1 items=0 ppid=2901 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:26:49.784000 audit[3063]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.784000 audit[3063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee4190a0 a2=0 a3=1 items=0 ppid=2901 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:26:49.787000 audit[3065]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.787000 audit[3065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeb52c490 a2=0 a3=1 items=0 ppid=2901 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 17:26:49.790000 audit[3068]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.790000 audit[3068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffac372e0 a2=0 a3=1 items=0 ppid=2901 pid=3068 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:26:49.792000 audit[3069]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.792000 audit[3069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff95cf280 a2=0 a3=1 items=0 ppid=2901 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:26:49.794000 audit[3071]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.794000 audit[3071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff99d4ca0 a2=0 a3=1 items=0 ppid=2901 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.794000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:26:49.795000 audit[3072]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.795000 audit[3072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee717ff0 a2=0 a3=1 items=0 ppid=2901 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:26:49.799000 audit[3074]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.799000 audit[3074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdb993110 a2=0 a3=1 items=0 ppid=2901 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:26:49.803000 audit[3077]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule 
pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.803000 audit[3077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe45e7430 a2=0 a3=1 items=0 ppid=2901 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.803000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:26:49.807000 audit[3080]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.807000 audit[3080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffefa3240 a2=0 a3=1 items=0 ppid=2901 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 17:26:49.808000 audit[3081]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.808000 audit[3081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffa7ca6f0 a2=0 a3=1 items=0 ppid=2901 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:26:49.811000 audit[3083]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.811000 audit[3083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff5447490 a2=0 a3=1 items=0 ppid=2901 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:49.815000 audit[3086]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.815000 audit[3086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd090e260 a2=0 a3=1 items=0 ppid=2901 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.815000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:49.817000 audit[3087]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.817000 audit[3087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffdbab770 a2=0 a3=1 items=0 ppid=2901 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:26:49.819000 audit[3089]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.819000 audit[3089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe438a000 a2=0 a3=1 items=0 ppid=2901 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:26:49.821000 audit[3090]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.821000 audit[3090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc11eaf60 a2=0 a3=1 items=0 ppid=2901 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:26:49.824000 audit[3092]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.824000 audit[3092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe65ffff0 a2=0 a3=1 items=0 ppid=2901 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:49.828000 audit[3095]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:49.828000 audit[3095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd3406bf0 a2=0 a3=1 items=0 ppid=2901 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 17:26:49.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:49.831000 audit[3097]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:26:49.831000 audit[3097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffeed251c0 a2=0 a3=1 items=0 ppid=2901 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.831000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:49.832000 audit[3097]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:26:49.832000 audit[3097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffeed251c0 a2=0 a3=1 items=0 ppid=2901 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:49.832000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:49.900435 kubelet[2781]: E1212 17:26:49.900350 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:50.058859 kubelet[2781]: E1212 17:26:50.057688 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:50.063517 kubelet[2781]: E1212 17:26:50.060343 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:50.081340 kubelet[2781]: I1212 17:26:50.081277 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7sk76" podStartSLOduration=2.081257845 podStartE2EDuration="2.081257845s" podCreationTimestamp="2025-12-12 17:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:50.069891333 +0000 UTC m=+7.130516903" watchObservedRunningTime="2025-12-12 17:26:50.081257845 +0000 UTC m=+7.141883375" Dec 12 17:26:50.464287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620576781.mount: Deactivated successfully. 
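Two decoding notes may help with the dense audit records above, both illustrative rather than part of any tool appearing in this log: the PROCTITLE field is the audited process's argv, hex-encoded with NUL separators, and the SYSCALL numbers follow the aarch64 table implied by arch=c00000b7, on which 211, 280 and 57 correspond to sendmsg, bpf and close, matching the NETFILTER_CFG and BPF prog LOAD/UNLOAD records they accompany. A minimal sketch:

# Minimal decoding sketch for the audit records above; the syscall map is a
# hand-picked subset assuming the standard aarch64 numbering, not a complete
# table, and none of this code belongs to auditd, runc or kube-proxy.
AARCH64_SYSCALLS = {57: "close", 211: "sendmsg", 280: "bpf"}

def decode_proctitle(hexstr):
    # PROCTITLE values are the argv of the audited process, hex-encoded with
    # NUL bytes separating the arguments.
    return " ".join(part.decode() for part in bytes.fromhex(hexstr).split(b"\x00"))

# Value copied verbatim from one of the NETFILTER_CFG records above:
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
print(AARCH64_SYSCALLS[211])  # -> sendmsg, the netlink call behind each NETFILTER_CFG entry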
Dec 12 17:26:51.059742 kubelet[2781]: E1212 17:26:51.059412 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:51.152852 containerd[1608]: time="2025-12-12T17:26:51.152795947Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:51.154332 containerd[1608]: time="2025-12-12T17:26:51.154283768Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 12 17:26:51.155385 containerd[1608]: time="2025-12-12T17:26:51.155355265Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:51.158371 containerd[1608]: time="2025-12-12T17:26:51.158324345Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:51.159565 containerd[1608]: time="2025-12-12T17:26:51.159534950Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.678142342s" Dec 12 17:26:51.159822 containerd[1608]: time="2025-12-12T17:26:51.159800644Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:26:51.166204 containerd[1608]: time="2025-12-12T17:26:51.165852788Z" level=info msg="CreateContainer within sandbox \"fd8215e888ecb01d27f4bb5c120dcae250a8fb518a05c8d711af8111809b736a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:26:51.171776 containerd[1608]: time="2025-12-12T17:26:51.171706693Z" level=info msg="Container 677ad328597ee7afe516f053549b4c9bd0766b5e465eb0ab07ad897eeb44492a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:51.174708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028375488.mount: Deactivated successfully. Dec 12 17:26:51.177108 containerd[1608]: time="2025-12-12T17:26:51.177052014Z" level=info msg="CreateContainer within sandbox \"fd8215e888ecb01d27f4bb5c120dcae250a8fb518a05c8d711af8111809b736a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"677ad328597ee7afe516f053549b4c9bd0766b5e465eb0ab07ad897eeb44492a\"" Dec 12 17:26:51.177788 containerd[1608]: time="2025-12-12T17:26:51.177645134Z" level=info msg="StartContainer for \"677ad328597ee7afe516f053549b4c9bd0766b5e465eb0ab07ad897eeb44492a\"" Dec 12 17:26:51.179955 containerd[1608]: time="2025-12-12T17:26:51.179913393Z" level=info msg="connecting to shim 677ad328597ee7afe516f053549b4c9bd0766b5e465eb0ab07ad897eeb44492a" address="unix:///run/containerd/s/92e66cfb480291c5c3ba1f299b014bf2544ad5d80744843df2fc592d77d06781" protocol=ttrpc version=3 Dec 12 17:26:51.229374 systemd[1]: Started cri-containerd-677ad328597ee7afe516f053549b4c9bd0766b5e465eb0ab07ad897eeb44492a.scope - libcontainer container 677ad328597ee7afe516f053549b4c9bd0766b5e465eb0ab07ad897eeb44492a. 
Dec 12 17:26:51.238000 audit: BPF prog-id=144 op=LOAD Dec 12 17:26:51.239000 audit: BPF prog-id=145 op=LOAD Dec 12 17:26:51.239000 audit[3106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2917 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376164333238353937656537616665353136663035333534396234 Dec 12 17:26:51.239000 audit: BPF prog-id=145 op=UNLOAD Dec 12 17:26:51.239000 audit[3106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2917 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376164333238353937656537616665353136663035333534396234 Dec 12 17:26:51.239000 audit: BPF prog-id=146 op=LOAD Dec 12 17:26:51.239000 audit[3106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2917 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376164333238353937656537616665353136663035333534396234 Dec 12 17:26:51.239000 audit: BPF prog-id=147 op=LOAD Dec 12 17:26:51.239000 audit[3106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2917 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376164333238353937656537616665353136663035333534396234 Dec 12 17:26:51.239000 audit: BPF prog-id=147 op=UNLOAD Dec 12 17:26:51.239000 audit[3106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2917 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376164333238353937656537616665353136663035333534396234 Dec 12 17:26:51.239000 audit: BPF prog-id=146 op=UNLOAD Dec 12 17:26:51.239000 audit[3106]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2917 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376164333238353937656537616665353136663035333534396234 Dec 12 17:26:51.239000 audit: BPF prog-id=148 op=LOAD Dec 12 17:26:51.239000 audit[3106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2917 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376164333238353937656537616665353136663035333534396234 Dec 12 17:26:51.260679 containerd[1608]: time="2025-12-12T17:26:51.260595156Z" level=info msg="StartContainer for \"677ad328597ee7afe516f053549b4c9bd0766b5e465eb0ab07ad897eeb44492a\" returns successfully" Dec 12 17:26:53.131514 kubelet[2781]: E1212 17:26:53.131476 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:53.170219 kubelet[2781]: I1212 17:26:53.170150 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hvrhq" podStartSLOduration=2.488112221 podStartE2EDuration="4.170130441s" podCreationTimestamp="2025-12-12 17:26:49 +0000 UTC" firstStartedPulling="2025-12-12 17:26:49.480845204 +0000 UTC m=+6.541470814" lastFinishedPulling="2025-12-12 17:26:51.162863464 +0000 UTC m=+8.223489034" observedRunningTime="2025-12-12 17:26:52.07456728 +0000 UTC m=+9.135192850" watchObservedRunningTime="2025-12-12 17:26:53.170130441 +0000 UTC m=+10.230756011" Dec 12 17:26:54.072984 kubelet[2781]: E1212 17:26:54.072867 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:54.431982 kubelet[2781]: E1212 17:26:54.431524 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:55.070261 kubelet[2781]: E1212 17:26:55.070228 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:26:57.020446 sudo[1836]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:57.019000 audit[1836]: USER_END pid=1836 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:57.021240 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 17:26:57.021306 kernel: audit: type=1106 audit(1765560417.019:506): pid=1836 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:57.024176 sshd[1835]: Connection closed by 10.0.0.1 port 35038 Dec 12 17:26:57.019000 audit[1836]: CRED_DISP pid=1836 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:57.027395 kernel: audit: type=1104 audit(1765560417.019:507): pid=1836 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:57.028314 sshd-session[1832]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:57.030000 audit[1832]: USER_END pid=1832 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:57.030000 audit[1832]: CRED_DISP pid=1832 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:57.035579 systemd[1]: sshd@6-10.0.0.57:22-10.0.0.1:35038.service: Deactivated successfully. Dec 12 17:26:57.037782 kernel: audit: type=1106 audit(1765560417.030:508): pid=1832 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:57.037827 kernel: audit: type=1104 audit(1765560417.030:509): pid=1832 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:26:57.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.57:22-10.0.0.1:35038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:57.038833 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:26:57.039092 systemd[1]: session-7.scope: Consumed 7.385s CPU time, 207.5M memory peak. Dec 12 17:26:57.041024 kernel: audit: type=1131 audit(1765560417.034:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.57:22-10.0.0.1:35038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:57.041717 systemd-logind[1592]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:26:57.043025 systemd-logind[1592]: Removed session 7. 
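A back-of-the-envelope cross-check of the pod_startup_latency_tracker records above, under the assumption that the tracker reports the E2E duration as watchObservedRunningTime minus podCreationTimestamp and excludes the firstStartedPulling-to-lastFinishedPulling window from the SLO figure: the tigera-operator numbers reproduce the logged values to within the sub-microsecond rounding of the printed timestamps, and for kube-proxy, which pulled no image, the two durations are identical as expected.

# Back-of-the-envelope check (not kubelet code) against the tigera-operator
# pod_startup_latency_tracker record above; timestamps are copied from the
# log, truncated to microseconds, and the "SLO excludes image pull" reading
# is an assumption that the logged figures happen to match.
from datetime import datetime, timezone

def ts(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created   = ts("2025-12-12 17:26:49.000000")   # podCreationTimestamp
pull_from = ts("2025-12-12 17:26:49.480845")   # firstStartedPulling
pull_to   = ts("2025-12-12 17:26:51.162863")   # lastFinishedPulling
observed  = ts("2025-12-12 17:26:53.170130")   # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()
print(f"E2E ~ {e2e:.6f}s (logged 4.170130441s), SLO ~ {slo:.6f}s (logged 2.488112221s)")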
Dec 12 17:26:58.881000 audit[3200]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:58.881000 audit[3200]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff5a97330 a2=0 a3=1 items=0 ppid=2901 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.888745 kernel: audit: type=1325 audit(1765560418.881:511): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:58.888978 kernel: audit: type=1300 audit(1765560418.881:511): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff5a97330 a2=0 a3=1 items=0 ppid=2901 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:58.891914 kernel: audit: type=1327 audit(1765560418.881:511): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:58.893000 audit[3200]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:58.893000 audit[3200]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5a97330 a2=0 a3=1 items=0 ppid=2901 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.901096 kernel: audit: type=1325 audit(1765560418.893:512): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:58.901186 kernel: audit: type=1300 audit(1765560418.893:512): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5a97330 a2=0 a3=1 items=0 ppid=2901 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:58.940000 audit[3202]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:58.940000 audit[3202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe774bce0 a2=0 a3=1 items=0 ppid=2901 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:58.951000 audit[3202]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 12 17:26:58.951000 audit[3202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe774bce0 a2=0 a3=1 items=0 ppid=2901 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:00.524706 update_engine[1594]: I20251212 17:27:00.524634 1594 update_attempter.cc:509] Updating boot flags... Dec 12 17:27:01.989000 audit[3222]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.989000 audit[3222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe6a42c60 a2=0 a3=1 items=0 ppid=2901 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:02.001000 audit[3222]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:02.001000 audit[3222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe6a42c60 a2=0 a3=1 items=0 ppid=2901 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:02.041000 audit[3224]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:02.044788 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 12 17:27:02.044830 kernel: audit: type=1325 audit(1765560422.041:517): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:02.041000 audit[3224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff56af970 a2=0 a3=1 items=0 ppid=2901 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.049830 kernel: audit: type=1300 audit(1765560422.041:517): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff56af970 a2=0 a3=1 items=0 ppid=2901 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:02.053051 kernel: audit: type=1327 audit(1765560422.041:517): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 
17:27:02.049000 audit[3224]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:02.055500 kernel: audit: type=1325 audit(1765560422.049:518): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:02.049000 audit[3224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff56af970 a2=0 a3=1 items=0 ppid=2901 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.060874 kernel: audit: type=1300 audit(1765560422.049:518): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff56af970 a2=0 a3=1 items=0 ppid=2901 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:02.063751 kernel: audit: type=1327 audit(1765560422.049:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:03.062000 audit[3226]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:03.062000 audit[3226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffde114180 a2=0 a3=1 items=0 ppid=2901 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.069785 kernel: audit: type=1325 audit(1765560423.062:519): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:03.069876 kernel: audit: type=1300 audit(1765560423.062:519): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffde114180 a2=0 a3=1 items=0 ppid=2901 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.069896 kernel: audit: type=1327 audit(1765560423.062:519): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:03.062000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:03.069000 audit[3226]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:03.074042 kernel: audit: type=1325 audit(1765560423.069:520): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:03.069000 audit[3226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffde114180 a2=0 a3=1 items=0 ppid=2901 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:05.309000 audit[3231]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:05.309000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe58aa870 a2=0 a3=1 items=0 ppid=2901 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:05.314000 audit[3231]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:05.314000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe58aa870 a2=0 a3=1 items=0 ppid=2901 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:05.333000 audit[3233]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:05.333000 audit[3233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff159ddd0 a2=0 a3=1 items=0 ppid=2901 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:05.342000 audit[3233]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:05.342000 audit[3233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff159ddd0 a2=0 a3=1 items=0 ppid=2901 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:05.350183 systemd[1]: Created slice kubepods-besteffort-pod92077568_059e_4882_95e2_329181bdd519.slice - libcontainer container kubepods-besteffort-pod92077568_059e_4882_95e2_329181bdd519.slice. 
Dec 12 17:27:05.363517 kubelet[2781]: I1212 17:27:05.363467 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdql\" (UniqueName: \"kubernetes.io/projected/92077568-059e-4882-95e2-329181bdd519-kube-api-access-5gdql\") pod \"calico-typha-6b9967d78b-wpn77\" (UID: \"92077568-059e-4882-95e2-329181bdd519\") " pod="calico-system/calico-typha-6b9967d78b-wpn77" Dec 12 17:27:05.363517 kubelet[2781]: I1212 17:27:05.363523 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/92077568-059e-4882-95e2-329181bdd519-typha-certs\") pod \"calico-typha-6b9967d78b-wpn77\" (UID: \"92077568-059e-4882-95e2-329181bdd519\") " pod="calico-system/calico-typha-6b9967d78b-wpn77" Dec 12 17:27:05.363885 kubelet[2781]: I1212 17:27:05.363549 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92077568-059e-4882-95e2-329181bdd519-tigera-ca-bundle\") pod \"calico-typha-6b9967d78b-wpn77\" (UID: \"92077568-059e-4882-95e2-329181bdd519\") " pod="calico-system/calico-typha-6b9967d78b-wpn77" Dec 12 17:27:05.527261 systemd[1]: Created slice kubepods-besteffort-pod468ed43c_9bd4_483b_b1da_af48f039caa4.slice - libcontainer container kubepods-besteffort-pod468ed43c_9bd4_483b_b1da_af48f039caa4.slice. Dec 12 17:27:05.564436 kubelet[2781]: I1212 17:27:05.564328 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/468ed43c-9bd4-483b-b1da-af48f039caa4-node-certs\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564436 kubelet[2781]: I1212 17:27:05.564371 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/468ed43c-9bd4-483b-b1da-af48f039caa4-xtables-lock\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564436 kubelet[2781]: I1212 17:27:05.564393 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/468ed43c-9bd4-483b-b1da-af48f039caa4-cni-log-dir\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564436 kubelet[2781]: I1212 17:27:05.564409 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/468ed43c-9bd4-483b-b1da-af48f039caa4-var-run-calico\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564436 kubelet[2781]: I1212 17:27:05.564426 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/468ed43c-9bd4-483b-b1da-af48f039caa4-flexvol-driver-host\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564788 kubelet[2781]: I1212 17:27:05.564443 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/468ed43c-9bd4-483b-b1da-af48f039caa4-lib-modules\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564788 kubelet[2781]: I1212 17:27:05.564458 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p92x\" (UniqueName: \"kubernetes.io/projected/468ed43c-9bd4-483b-b1da-af48f039caa4-kube-api-access-7p92x\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564788 kubelet[2781]: I1212 17:27:05.564473 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/468ed43c-9bd4-483b-b1da-af48f039caa4-var-lib-calico\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564788 kubelet[2781]: I1212 17:27:05.564488 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/468ed43c-9bd4-483b-b1da-af48f039caa4-cni-bin-dir\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564788 kubelet[2781]: I1212 17:27:05.564501 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/468ed43c-9bd4-483b-b1da-af48f039caa4-policysync\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564898 kubelet[2781]: I1212 17:27:05.564518 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/468ed43c-9bd4-483b-b1da-af48f039caa4-tigera-ca-bundle\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.564898 kubelet[2781]: I1212 17:27:05.564531 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/468ed43c-9bd4-483b-b1da-af48f039caa4-cni-net-dir\") pod \"calico-node-kf77f\" (UID: \"468ed43c-9bd4-483b-b1da-af48f039caa4\") " pod="calico-system/calico-node-kf77f" Dec 12 17:27:05.654370 kubelet[2781]: E1212 17:27:05.654326 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:05.655028 containerd[1608]: time="2025-12-12T17:27:05.654982832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9967d78b-wpn77,Uid:92077568-059e-4882-95e2-329181bdd519,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:05.672616 kubelet[2781]: E1212 17:27:05.671318 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.672616 kubelet[2781]: W1212 17:27:05.671343 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.676254 kubelet[2781]: E1212 17:27:05.676184 2781 plugins.go:703] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.681236 kubelet[2781]: E1212 17:27:05.681209 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.681236 kubelet[2781]: W1212 17:27:05.681230 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.681337 kubelet[2781]: E1212 17:27:05.681249 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.708451 containerd[1608]: time="2025-12-12T17:27:05.708395346Z" level=info msg="connecting to shim c532adccad1af21bf06c77bd932ff0c659e3c4c52037a1d872171c1d30ca29fd" address="unix:///run/containerd/s/32639cda62691c2d7ac91fb5cc5bff22e16ea4f16a4b510078913ea12230dfcb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:05.717776 kubelet[2781]: E1212 17:27:05.717616 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:05.756908 kubelet[2781]: E1212 17:27:05.756876 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.756908 kubelet[2781]: W1212 17:27:05.756902 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.757050 kubelet[2781]: E1212 17:27:05.756924 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.757100 kubelet[2781]: E1212 17:27:05.757086 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.760147 kubelet[2781]: W1212 17:27:05.757097 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.760260 kubelet[2781]: E1212 17:27:05.760156 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.760509 kubelet[2781]: E1212 17:27:05.760493 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.760585 kubelet[2781]: W1212 17:27:05.760509 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.760585 kubelet[2781]: E1212 17:27:05.760521 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.760782 kubelet[2781]: E1212 17:27:05.760751 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.760782 kubelet[2781]: W1212 17:27:05.760781 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.760856 kubelet[2781]: E1212 17:27:05.760791 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.760958 kubelet[2781]: E1212 17:27:05.760946 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.761000 kubelet[2781]: W1212 17:27:05.760958 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.761000 kubelet[2781]: E1212 17:27:05.760966 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.761106 kubelet[2781]: E1212 17:27:05.761087 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.761106 kubelet[2781]: W1212 17:27:05.761095 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.761106 kubelet[2781]: E1212 17:27:05.761102 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.761256 kubelet[2781]: E1212 17:27:05.761244 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.761256 kubelet[2781]: W1212 17:27:05.761254 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.761317 kubelet[2781]: E1212 17:27:05.761261 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.761393 kubelet[2781]: E1212 17:27:05.761384 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.761423 kubelet[2781]: W1212 17:27:05.761393 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.761423 kubelet[2781]: E1212 17:27:05.761403 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.761545 kubelet[2781]: E1212 17:27:05.761534 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.761545 kubelet[2781]: W1212 17:27:05.761546 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.761603 kubelet[2781]: E1212 17:27:05.761554 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.761687 kubelet[2781]: E1212 17:27:05.761676 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.761687 kubelet[2781]: W1212 17:27:05.761686 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.761741 kubelet[2781]: E1212 17:27:05.761717 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.761854 kubelet[2781]: E1212 17:27:05.761843 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.761854 kubelet[2781]: W1212 17:27:05.761855 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.761911 kubelet[2781]: E1212 17:27:05.761862 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.761994 kubelet[2781]: E1212 17:27:05.761985 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.761994 kubelet[2781]: W1212 17:27:05.761994 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.762051 kubelet[2781]: E1212 17:27:05.762002 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.762160 kubelet[2781]: E1212 17:27:05.762148 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.762160 kubelet[2781]: W1212 17:27:05.762159 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.762217 kubelet[2781]: E1212 17:27:05.762168 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.762340 kubelet[2781]: E1212 17:27:05.762316 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.762366 kubelet[2781]: W1212 17:27:05.762341 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.762366 kubelet[2781]: E1212 17:27:05.762351 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.762510 kubelet[2781]: E1212 17:27:05.762500 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.762510 kubelet[2781]: W1212 17:27:05.762510 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.762558 kubelet[2781]: E1212 17:27:05.762521 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.762660 kubelet[2781]: E1212 17:27:05.762649 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.762660 kubelet[2781]: W1212 17:27:05.762660 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.762709 kubelet[2781]: E1212 17:27:05.762667 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.762823 kubelet[2781]: E1212 17:27:05.762811 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.762848 kubelet[2781]: W1212 17:27:05.762823 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.762848 kubelet[2781]: E1212 17:27:05.762831 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.762960 kubelet[2781]: E1212 17:27:05.762950 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.762960 kubelet[2781]: W1212 17:27:05.762960 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.763014 kubelet[2781]: E1212 17:27:05.762968 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.763092 kubelet[2781]: E1212 17:27:05.763082 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.763092 kubelet[2781]: W1212 17:27:05.763091 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.763151 kubelet[2781]: E1212 17:27:05.763099 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.763258 kubelet[2781]: E1212 17:27:05.763247 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.763292 kubelet[2781]: W1212 17:27:05.763259 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.763292 kubelet[2781]: E1212 17:27:05.763268 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.763655 systemd[1]: Started cri-containerd-c532adccad1af21bf06c77bd932ff0c659e3c4c52037a1d872171c1d30ca29fd.scope - libcontainer container c532adccad1af21bf06c77bd932ff0c659e3c4c52037a1d872171c1d30ca29fd. Dec 12 17:27:05.765968 kubelet[2781]: E1212 17:27:05.765915 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.765968 kubelet[2781]: W1212 17:27:05.765933 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.765968 kubelet[2781]: E1212 17:27:05.765948 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.766271 kubelet[2781]: I1212 17:27:05.766252 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5fefc895-2faa-4b6c-b800-5fdfceed3426-registration-dir\") pod \"csi-node-driver-ffhww\" (UID: \"5fefc895-2faa-4b6c-b800-5fdfceed3426\") " pod="calico-system/csi-node-driver-ffhww" Dec 12 17:27:05.766442 kubelet[2781]: E1212 17:27:05.766342 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.766524 kubelet[2781]: W1212 17:27:05.766481 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.766524 kubelet[2781]: E1212 17:27:05.766495 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.766814 kubelet[2781]: E1212 17:27:05.766780 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.766814 kubelet[2781]: W1212 17:27:05.766793 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.766814 kubelet[2781]: E1212 17:27:05.766803 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.767142 kubelet[2781]: E1212 17:27:05.767096 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.767142 kubelet[2781]: W1212 17:27:05.767109 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.767142 kubelet[2781]: E1212 17:27:05.767126 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.767266 kubelet[2781]: I1212 17:27:05.767253 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fefc895-2faa-4b6c-b800-5fdfceed3426-kubelet-dir\") pod \"csi-node-driver-ffhww\" (UID: \"5fefc895-2faa-4b6c-b800-5fdfceed3426\") " pod="calico-system/csi-node-driver-ffhww" Dec 12 17:27:05.767487 kubelet[2781]: E1212 17:27:05.767473 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.767564 kubelet[2781]: W1212 17:27:05.767550 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.767623 kubelet[2781]: E1212 17:27:05.767613 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.767965 kubelet[2781]: E1212 17:27:05.767885 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.767965 kubelet[2781]: W1212 17:27:05.767897 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.767965 kubelet[2781]: E1212 17:27:05.767911 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.768209 kubelet[2781]: E1212 17:27:05.768197 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.768376 kubelet[2781]: W1212 17:27:05.768271 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.768376 kubelet[2781]: E1212 17:27:05.768287 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.768376 kubelet[2781]: I1212 17:27:05.768310 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5fefc895-2faa-4b6c-b800-5fdfceed3426-socket-dir\") pod \"csi-node-driver-ffhww\" (UID: \"5fefc895-2faa-4b6c-b800-5fdfceed3426\") " pod="calico-system/csi-node-driver-ffhww" Dec 12 17:27:05.768609 kubelet[2781]: E1212 17:27:05.768596 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.768706 kubelet[2781]: W1212 17:27:05.768677 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.768706 kubelet[2781]: E1212 17:27:05.768693 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.768801 kubelet[2781]: I1212 17:27:05.768787 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5fefc895-2faa-4b6c-b800-5fdfceed3426-varrun\") pod \"csi-node-driver-ffhww\" (UID: \"5fefc895-2faa-4b6c-b800-5fdfceed3426\") " pod="calico-system/csi-node-driver-ffhww" Dec 12 17:27:05.769052 kubelet[2781]: E1212 17:27:05.769011 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.769153 kubelet[2781]: W1212 17:27:05.769140 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.769205 kubelet[2781]: E1212 17:27:05.769195 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.769499 kubelet[2781]: E1212 17:27:05.769466 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.769499 kubelet[2781]: W1212 17:27:05.769478 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.769499 kubelet[2781]: E1212 17:27:05.769488 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.769921 kubelet[2781]: E1212 17:27:05.769886 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.769921 kubelet[2781]: W1212 17:27:05.769899 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.769921 kubelet[2781]: E1212 17:27:05.769909 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.770268 kubelet[2781]: E1212 17:27:05.770234 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.770268 kubelet[2781]: W1212 17:27:05.770247 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.770268 kubelet[2781]: E1212 17:27:05.770257 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.770621 kubelet[2781]: E1212 17:27:05.770583 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.770621 kubelet[2781]: W1212 17:27:05.770596 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.770621 kubelet[2781]: E1212 17:27:05.770607 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.770793 kubelet[2781]: I1212 17:27:05.770745 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzfv\" (UniqueName: \"kubernetes.io/projected/5fefc895-2faa-4b6c-b800-5fdfceed3426-kube-api-access-jgzfv\") pod \"csi-node-driver-ffhww\" (UID: \"5fefc895-2faa-4b6c-b800-5fdfceed3426\") " pod="calico-system/csi-node-driver-ffhww" Dec 12 17:27:05.771027 kubelet[2781]: E1212 17:27:05.771012 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.771100 kubelet[2781]: W1212 17:27:05.771087 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.771236 kubelet[2781]: E1212 17:27:05.771224 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.771487 kubelet[2781]: E1212 17:27:05.771475 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.771590 kubelet[2781]: W1212 17:27:05.771524 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.771590 kubelet[2781]: E1212 17:27:05.771538 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.774000 audit: BPF prog-id=149 op=LOAD Dec 12 17:27:05.775000 audit: BPF prog-id=150 op=LOAD Dec 12 17:27:05.775000 audit[3266]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3249 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333261646363616431616632316266303663373762643933326666 Dec 12 17:27:05.775000 audit: BPF prog-id=150 op=UNLOAD Dec 12 17:27:05.775000 audit[3266]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333261646363616431616632316266303663373762643933326666 Dec 12 17:27:05.775000 audit: BPF prog-id=151 op=LOAD Dec 12 17:27:05.775000 audit[3266]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3249 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333261646363616431616632316266303663373762643933326666 Dec 12 17:27:05.775000 audit: BPF prog-id=152 op=LOAD Dec 12 17:27:05.775000 audit[3266]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3249 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333261646363616431616632316266303663373762643933326666 Dec 12 17:27:05.775000 audit: BPF prog-id=152 op=UNLOAD 
Dec 12 17:27:05.775000 audit[3266]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333261646363616431616632316266303663373762643933326666 Dec 12 17:27:05.775000 audit: BPF prog-id=151 op=UNLOAD Dec 12 17:27:05.775000 audit[3266]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333261646363616431616632316266303663373762643933326666 Dec 12 17:27:05.775000 audit: BPF prog-id=153 op=LOAD Dec 12 17:27:05.775000 audit[3266]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3249 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333261646363616431616632316266303663373762643933326666 Dec 12 17:27:05.830276 kubelet[2781]: E1212 17:27:05.830164 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:05.830970 containerd[1608]: time="2025-12-12T17:27:05.830754947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kf77f,Uid:468ed43c-9bd4-483b-b1da-af48f039caa4,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:05.872267 kubelet[2781]: E1212 17:27:05.872198 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.872267 kubelet[2781]: W1212 17:27:05.872224 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.872267 kubelet[2781]: E1212 17:27:05.872244 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.872739 kubelet[2781]: E1212 17:27:05.872663 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.872739 kubelet[2781]: W1212 17:27:05.872711 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.872739 kubelet[2781]: E1212 17:27:05.872724 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.873104 kubelet[2781]: E1212 17:27:05.873058 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.873104 kubelet[2781]: W1212 17:27:05.873070 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.873104 kubelet[2781]: E1212 17:27:05.873080 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.873343 kubelet[2781]: E1212 17:27:05.873325 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.873381 kubelet[2781]: W1212 17:27:05.873343 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.873381 kubelet[2781]: E1212 17:27:05.873356 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.873536 kubelet[2781]: E1212 17:27:05.873526 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.873536 kubelet[2781]: W1212 17:27:05.873536 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.873652 kubelet[2781]: E1212 17:27:05.873545 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.873712 kubelet[2781]: E1212 17:27:05.873700 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.873712 kubelet[2781]: W1212 17:27:05.873711 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.873763 kubelet[2781]: E1212 17:27:05.873721 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.873934 kubelet[2781]: E1212 17:27:05.873924 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.873934 kubelet[2781]: W1212 17:27:05.873934 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.874013 kubelet[2781]: E1212 17:27:05.873942 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.874110 kubelet[2781]: E1212 17:27:05.874101 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.874110 kubelet[2781]: W1212 17:27:05.874110 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.874198 kubelet[2781]: E1212 17:27:05.874128 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.874269 kubelet[2781]: E1212 17:27:05.874258 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.874269 kubelet[2781]: W1212 17:27:05.874268 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.874343 kubelet[2781]: E1212 17:27:05.874275 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.874416 kubelet[2781]: E1212 17:27:05.874407 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.874416 kubelet[2781]: W1212 17:27:05.874416 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.874494 kubelet[2781]: E1212 17:27:05.874423 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.874568 kubelet[2781]: E1212 17:27:05.874548 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.874568 kubelet[2781]: W1212 17:27:05.874555 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.874568 kubelet[2781]: E1212 17:27:05.874561 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.874705 kubelet[2781]: E1212 17:27:05.874684 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.874705 kubelet[2781]: W1212 17:27:05.874691 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.874705 kubelet[2781]: E1212 17:27:05.874698 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.875063 kubelet[2781]: E1212 17:27:05.875005 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.875063 kubelet[2781]: W1212 17:27:05.875020 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.875063 kubelet[2781]: E1212 17:27:05.875031 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.875265 kubelet[2781]: E1212 17:27:05.875247 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.875265 kubelet[2781]: W1212 17:27:05.875260 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.875338 kubelet[2781]: E1212 17:27:05.875270 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.875412 kubelet[2781]: E1212 17:27:05.875402 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.875412 kubelet[2781]: W1212 17:27:05.875412 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.875487 kubelet[2781]: E1212 17:27:05.875420 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.875543 kubelet[2781]: E1212 17:27:05.875533 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.875543 kubelet[2781]: W1212 17:27:05.875542 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.875616 kubelet[2781]: E1212 17:27:05.875549 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.875711 kubelet[2781]: E1212 17:27:05.875701 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.875711 kubelet[2781]: W1212 17:27:05.875711 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.875773 kubelet[2781]: E1212 17:27:05.875718 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.876101 kubelet[2781]: E1212 17:27:05.876039 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.876101 kubelet[2781]: W1212 17:27:05.876052 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.876101 kubelet[2781]: E1212 17:27:05.876064 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.876422 kubelet[2781]: E1212 17:27:05.876387 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.876422 kubelet[2781]: W1212 17:27:05.876400 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.876422 kubelet[2781]: E1212 17:27:05.876410 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.876797 kubelet[2781]: E1212 17:27:05.876763 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.876797 kubelet[2781]: W1212 17:27:05.876776 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.876797 kubelet[2781]: E1212 17:27:05.876786 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.877112 kubelet[2781]: E1212 17:27:05.877075 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.877112 kubelet[2781]: W1212 17:27:05.877089 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.877112 kubelet[2781]: E1212 17:27:05.877099 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.877524 kubelet[2781]: E1212 17:27:05.877489 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.877524 kubelet[2781]: W1212 17:27:05.877502 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.877524 kubelet[2781]: E1212 17:27:05.877513 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.878039 kubelet[2781]: E1212 17:27:05.877975 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.878039 kubelet[2781]: W1212 17:27:05.877989 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.878039 kubelet[2781]: E1212 17:27:05.878001 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.878376 kubelet[2781]: E1212 17:27:05.878282 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.878376 kubelet[2781]: W1212 17:27:05.878295 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.878376 kubelet[2781]: E1212 17:27:05.878305 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.878652 kubelet[2781]: E1212 17:27:05.878630 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.878721 kubelet[2781]: W1212 17:27:05.878708 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.878805 kubelet[2781]: E1212 17:27:05.878794 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:05.889419 containerd[1608]: time="2025-12-12T17:27:05.889383096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9967d78b-wpn77,Uid:92077568-059e-4882-95e2-329181bdd519,Namespace:calico-system,Attempt:0,} returns sandbox id \"c532adccad1af21bf06c77bd932ff0c659e3c4c52037a1d872171c1d30ca29fd\"" Dec 12 17:27:05.893954 kubelet[2781]: E1212 17:27:05.893879 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:05.897760 containerd[1608]: time="2025-12-12T17:27:05.897722159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:27:05.902397 kubelet[2781]: E1212 17:27:05.902372 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:05.902513 kubelet[2781]: W1212 17:27:05.902424 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:05.902513 kubelet[2781]: E1212 17:27:05.902445 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:05.939416 containerd[1608]: time="2025-12-12T17:27:05.939361790Z" level=info msg="connecting to shim 5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74" address="unix:///run/containerd/s/109fb3ba16683bd25cccbc2808b564120b2a9b8ae610be042ff5a1dad5c5609a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:05.964386 systemd[1]: Started cri-containerd-5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74.scope - libcontainer container 5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74. 
Dec 12 17:27:05.973000 audit: BPF prog-id=154 op=LOAD Dec 12 17:27:05.973000 audit: BPF prog-id=155 op=LOAD Dec 12 17:27:05.973000 audit[3378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3367 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562323033323362323363346435363337623233643835303562313032 Dec 12 17:27:05.973000 audit: BPF prog-id=155 op=UNLOAD Dec 12 17:27:05.973000 audit[3378]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562323033323362323363346435363337623233643835303562313032 Dec 12 17:27:05.973000 audit: BPF prog-id=156 op=LOAD Dec 12 17:27:05.973000 audit[3378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3367 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562323033323362323363346435363337623233643835303562313032 Dec 12 17:27:05.974000 audit: BPF prog-id=157 op=LOAD Dec 12 17:27:05.974000 audit[3378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3367 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562323033323362323363346435363337623233643835303562313032 Dec 12 17:27:05.974000 audit: BPF prog-id=157 op=UNLOAD Dec 12 17:27:05.974000 audit[3378]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562323033323362323363346435363337623233643835303562313032 Dec 12 17:27:05.974000 audit: BPF prog-id=156 op=UNLOAD Dec 12 17:27:05.974000 audit[3378]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562323033323362323363346435363337623233643835303562313032 Dec 12 17:27:05.974000 audit: BPF prog-id=158 op=LOAD Dec 12 17:27:05.974000 audit[3378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3367 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:05.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562323033323362323363346435363337623233643835303562313032 Dec 12 17:27:05.998163 containerd[1608]: time="2025-12-12T17:27:05.997719072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kf77f,Uid:468ed43c-9bd4-483b-b1da-af48f039caa4,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74\"" Dec 12 17:27:06.001444 kubelet[2781]: E1212 17:27:06.001411 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:06.358000 audit[3404]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:06.358000 audit[3404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd798b930 a2=0 a3=1 items=0 ppid=2901 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:06.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:06.365000 audit[3404]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:06.365000 audit[3404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd798b930 a2=0 a3=1 items=0 ppid=2901 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:06.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:06.835278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412348873.mount: Deactivated successfully. 
Dec 12 17:27:07.029901 kubelet[2781]: E1212 17:27:07.029834 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:07.346365 containerd[1608]: time="2025-12-12T17:27:07.346300027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:07.346995 containerd[1608]: time="2025-12-12T17:27:07.346939445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 12 17:27:07.347849 containerd[1608]: time="2025-12-12T17:27:07.347806723Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:07.349875 containerd[1608]: time="2025-12-12T17:27:07.349846307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:07.350533 containerd[1608]: time="2025-12-12T17:27:07.350370675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.452606392s" Dec 12 17:27:07.350533 containerd[1608]: time="2025-12-12T17:27:07.350403518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:27:07.354084 containerd[1608]: time="2025-12-12T17:27:07.354053847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:27:07.366532 containerd[1608]: time="2025-12-12T17:27:07.366481848Z" level=info msg="CreateContainer within sandbox \"c532adccad1af21bf06c77bd932ff0c659e3c4c52037a1d872171c1d30ca29fd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:27:07.376164 containerd[1608]: time="2025-12-12T17:27:07.375760085Z" level=info msg="Container 1dc5287ec4aa2b6a0deda349b0899ae20d71416f89a5a51d3476e4139b2e2089: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:07.382007 containerd[1608]: time="2025-12-12T17:27:07.381963684Z" level=info msg="CreateContainer within sandbox \"c532adccad1af21bf06c77bd932ff0c659e3c4c52037a1d872171c1d30ca29fd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1dc5287ec4aa2b6a0deda349b0899ae20d71416f89a5a51d3476e4139b2e2089\"" Dec 12 17:27:07.383357 containerd[1608]: time="2025-12-12T17:27:07.383319246Z" level=info msg="StartContainer for \"1dc5287ec4aa2b6a0deda349b0899ae20d71416f89a5a51d3476e4139b2e2089\"" Dec 12 17:27:07.384678 containerd[1608]: time="2025-12-12T17:27:07.384648766Z" level=info msg="connecting to shim 1dc5287ec4aa2b6a0deda349b0899ae20d71416f89a5a51d3476e4139b2e2089" address="unix:///run/containerd/s/32639cda62691c2d7ac91fb5cc5bff22e16ea4f16a4b510078913ea12230dfcb" protocol=ttrpc version=3 Dec 12 17:27:07.415386 systemd[1]: Started 
cri-containerd-1dc5287ec4aa2b6a0deda349b0899ae20d71416f89a5a51d3476e4139b2e2089.scope - libcontainer container 1dc5287ec4aa2b6a0deda349b0899ae20d71416f89a5a51d3476e4139b2e2089. Dec 12 17:27:07.428662 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 12 17:27:07.428775 kernel: audit: type=1334 audit(1765560427.426:543): prog-id=159 op=LOAD Dec 12 17:27:07.426000 audit: BPF prog-id=159 op=LOAD Dec 12 17:27:07.428000 audit: BPF prog-id=160 op=LOAD Dec 12 17:27:07.430907 kernel: audit: type=1334 audit(1765560427.428:544): prog-id=160 op=LOAD Dec 12 17:27:07.428000 audit[3415]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.434597 kernel: audit: type=1300 audit(1765560427.428:544): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.438061 kernel: audit: type=1327 audit(1765560427.428:544): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.438175 kernel: audit: type=1334 audit(1765560427.428:545): prog-id=160 op=UNLOAD Dec 12 17:27:07.428000 audit: BPF prog-id=160 op=UNLOAD Dec 12 17:27:07.439157 kernel: audit: type=1300 audit(1765560427.428:545): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.428000 audit[3415]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.445479 kernel: audit: type=1327 audit(1765560427.428:545): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.445552 kernel: audit: type=1334 audit(1765560427.428:546): prog-id=161 op=LOAD Dec 12 17:27:07.428000 audit: BPF prog-id=161 op=LOAD Dec 12 17:27:07.428000 audit[3415]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.449439 kernel: audit: type=1300 audit(1765560427.428:546): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.449499 kernel: audit: type=1327 audit(1765560427.428:546): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.428000 audit: BPF prog-id=162 op=LOAD Dec 12 17:27:07.428000 audit[3415]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.428000 audit: BPF prog-id=162 op=UNLOAD Dec 12 17:27:07.428000 audit[3415]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.428000 audit: BPF prog-id=161 op=UNLOAD Dec 12 17:27:07.428000 audit[3415]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.428000 audit: BPF prog-id=163 op=LOAD Dec 12 17:27:07.428000 audit[3415]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3249 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:07.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633532383765633461613262366130646564613334396230383939 Dec 12 17:27:07.492877 containerd[1608]: time="2025-12-12T17:27:07.492816202Z" level=info msg="StartContainer for \"1dc5287ec4aa2b6a0deda349b0899ae20d71416f89a5a51d3476e4139b2e2089\" returns successfully" Dec 12 17:27:08.119302 kubelet[2781]: E1212 17:27:08.119255 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:08.151069 kubelet[2781]: I1212 17:27:08.150713 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b9967d78b-wpn77" podStartSLOduration=1.689756403 podStartE2EDuration="3.148036929s" podCreationTimestamp="2025-12-12 17:27:05 +0000 UTC" firstStartedPulling="2025-12-12 17:27:05.895066617 +0000 UTC m=+22.955692187" lastFinishedPulling="2025-12-12 17:27:07.353347183 +0000 UTC m=+24.413972713" observedRunningTime="2025-12-12 17:27:08.134010798 +0000 UTC m=+25.194636368" watchObservedRunningTime="2025-12-12 17:27:08.148036929 +0000 UTC m=+25.208662459" Dec 12 17:27:08.161000 audit[3459]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:08.161000 audit[3459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd1b8dd20 a2=0 a3=1 items=0 ppid=2901 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:08.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:08.166000 audit[3459]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:08.166000 audit[3459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd1b8dd20 a2=0 a3=1 items=0 ppid=2901 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:08.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:08.180036 kubelet[2781]: E1212 17:27:08.179768 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.180557 kubelet[2781]: W1212 17:27:08.180355 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.180557 kubelet[2781]: E1212 17:27:08.180386 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:08.182313 kubelet[2781]: E1212 17:27:08.181891 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.182313 kubelet[2781]: W1212 17:27:08.181914 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.182313 kubelet[2781]: E1212 17:27:08.181931 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.183432 kubelet[2781]: E1212 17:27:08.182962 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.183432 kubelet[2781]: W1212 17:27:08.182977 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.183432 kubelet[2781]: E1212 17:27:08.182991 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.183765 kubelet[2781]: E1212 17:27:08.183695 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.183950 kubelet[2781]: W1212 17:27:08.183859 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.184360 kubelet[2781]: E1212 17:27:08.183990 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.184599 kubelet[2781]: E1212 17:27:08.184542 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.184696 kubelet[2781]: W1212 17:27:08.184578 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.184838 kubelet[2781]: E1212 17:27:08.184729 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.185290 kubelet[2781]: E1212 17:27:08.185181 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.185290 kubelet[2781]: W1212 17:27:08.185195 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.185290 kubelet[2781]: E1212 17:27:08.185206 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:08.186244 kubelet[2781]: E1212 17:27:08.186218 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.186335 kubelet[2781]: W1212 17:27:08.186307 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.186486 kubelet[2781]: E1212 17:27:08.186404 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.186723 kubelet[2781]: E1212 17:27:08.186698 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.186936 kubelet[2781]: W1212 17:27:08.186782 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.186936 kubelet[2781]: E1212 17:27:08.186806 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.187104 kubelet[2781]: E1212 17:27:08.187078 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.187174 kubelet[2781]: W1212 17:27:08.187162 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.187315 kubelet[2781]: E1212 17:27:08.187220 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.187534 kubelet[2781]: E1212 17:27:08.187478 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.187534 kubelet[2781]: W1212 17:27:08.187489 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.187534 kubelet[2781]: E1212 17:27:08.187499 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.187863 kubelet[2781]: E1212 17:27:08.187820 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.187863 kubelet[2781]: W1212 17:27:08.187832 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.188043 kubelet[2781]: E1212 17:27:08.187949 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:08.188589 kubelet[2781]: E1212 17:27:08.188564 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.188664 kubelet[2781]: W1212 17:27:08.188646 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.188732 kubelet[2781]: E1212 17:27:08.188719 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.189000 kubelet[2781]: E1212 17:27:08.188987 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.189000 kubelet[2781]: W1212 17:27:08.189022 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.189000 kubelet[2781]: E1212 17:27:08.189034 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.189429 kubelet[2781]: E1212 17:27:08.189406 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.189517 kubelet[2781]: W1212 17:27:08.189490 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.189572 kubelet[2781]: E1212 17:27:08.189562 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.189881 kubelet[2781]: E1212 17:27:08.189864 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.190026 kubelet[2781]: W1212 17:27:08.189949 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.190026 kubelet[2781]: E1212 17:27:08.189967 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.191035 kubelet[2781]: E1212 17:27:08.191018 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.191035 kubelet[2781]: W1212 17:27:08.191035 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.191098 kubelet[2781]: E1212 17:27:08.191048 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:08.191234 kubelet[2781]: E1212 17:27:08.191222 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.191234 kubelet[2781]: W1212 17:27:08.191233 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.191292 kubelet[2781]: E1212 17:27:08.191242 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.191443 kubelet[2781]: E1212 17:27:08.191431 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.191443 kubelet[2781]: W1212 17:27:08.191442 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.191494 kubelet[2781]: E1212 17:27:08.191452 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.191734 kubelet[2781]: E1212 17:27:08.191712 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.191776 kubelet[2781]: W1212 17:27:08.191734 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.191776 kubelet[2781]: E1212 17:27:08.191747 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.191954 kubelet[2781]: E1212 17:27:08.191943 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.191988 kubelet[2781]: W1212 17:27:08.191954 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.191988 kubelet[2781]: E1212 17:27:08.191963 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.192138 kubelet[2781]: E1212 17:27:08.192125 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.192183 kubelet[2781]: W1212 17:27:08.192138 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.192183 kubelet[2781]: E1212 17:27:08.192148 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:08.192322 kubelet[2781]: E1212 17:27:08.192308 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.192355 kubelet[2781]: W1212 17:27:08.192332 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.192355 kubelet[2781]: E1212 17:27:08.192345 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.192602 kubelet[2781]: E1212 17:27:08.192588 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.192602 kubelet[2781]: W1212 17:27:08.192601 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.192666 kubelet[2781]: E1212 17:27:08.192611 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.192792 kubelet[2781]: E1212 17:27:08.192782 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.192792 kubelet[2781]: W1212 17:27:08.192792 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.192855 kubelet[2781]: E1212 17:27:08.192800 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.193006 kubelet[2781]: E1212 17:27:08.192995 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.193038 kubelet[2781]: W1212 17:27:08.193006 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.193038 kubelet[2781]: E1212 17:27:08.193015 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.193188 kubelet[2781]: E1212 17:27:08.193178 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.193219 kubelet[2781]: W1212 17:27:08.193188 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.193219 kubelet[2781]: E1212 17:27:08.193197 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:08.193380 kubelet[2781]: E1212 17:27:08.193369 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.193380 kubelet[2781]: W1212 17:27:08.193379 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.193449 kubelet[2781]: E1212 17:27:08.193387 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.193637 kubelet[2781]: E1212 17:27:08.193622 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.193674 kubelet[2781]: W1212 17:27:08.193637 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.193674 kubelet[2781]: E1212 17:27:08.193649 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.193807 kubelet[2781]: E1212 17:27:08.193795 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.193807 kubelet[2781]: W1212 17:27:08.193805 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.193906 kubelet[2781]: E1212 17:27:08.193813 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.194144 kubelet[2781]: E1212 17:27:08.194095 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.194178 kubelet[2781]: W1212 17:27:08.194110 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.194178 kubelet[2781]: E1212 17:27:08.194164 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.194601 kubelet[2781]: E1212 17:27:08.194584 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.194601 kubelet[2781]: W1212 17:27:08.194602 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.194672 kubelet[2781]: E1212 17:27:08.194614 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:27:08.195764 kubelet[2781]: E1212 17:27:08.195743 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.195811 kubelet[2781]: W1212 17:27:08.195765 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.195811 kubelet[2781]: E1212 17:27:08.195781 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.197884 kubelet[2781]: E1212 17:27:08.197798 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:27:08.197884 kubelet[2781]: W1212 17:27:08.197880 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:27:08.197972 kubelet[2781]: E1212 17:27:08.197920 2781 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:27:08.387440 containerd[1608]: time="2025-12-12T17:27:08.387272179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:08.390395 containerd[1608]: time="2025-12-12T17:27:08.390236675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=741" Dec 12 17:27:08.392291 containerd[1608]: time="2025-12-12T17:27:08.392210605Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:08.394924 containerd[1608]: time="2025-12-12T17:27:08.394868235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:08.395561 containerd[1608]: time="2025-12-12T17:27:08.395522011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.041433042s" Dec 12 17:27:08.395561 containerd[1608]: time="2025-12-12T17:27:08.395558975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:27:08.400156 containerd[1608]: time="2025-12-12T17:27:08.399130643Z" level=info msg="CreateContainer within sandbox \"5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:27:08.411173 containerd[1608]: time="2025-12-12T17:27:08.410362772Z" level=info msg="Container 14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc: CDI 
devices from CRI Config.CDIDevices: []" Dec 12 17:27:08.426567 containerd[1608]: time="2025-12-12T17:27:08.426519687Z" level=info msg="CreateContainer within sandbox \"5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc\"" Dec 12 17:27:08.428176 containerd[1608]: time="2025-12-12T17:27:08.427045692Z" level=info msg="StartContainer for \"14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc\"" Dec 12 17:27:08.430102 containerd[1608]: time="2025-12-12T17:27:08.430017669Z" level=info msg="connecting to shim 14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc" address="unix:///run/containerd/s/109fb3ba16683bd25cccbc2808b564120b2a9b8ae610be042ff5a1dad5c5609a" protocol=ttrpc version=3 Dec 12 17:27:08.463389 systemd[1]: Started cri-containerd-14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc.scope - libcontainer container 14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc. Dec 12 17:27:08.524000 audit: BPF prog-id=164 op=LOAD Dec 12 17:27:08.524000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3367 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:08.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134663835613735613133383465393830613136336162386338613164 Dec 12 17:27:08.524000 audit: BPF prog-id=165 op=LOAD Dec 12 17:27:08.524000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3367 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:08.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134663835613735613133383465393830613136336162386338613164 Dec 12 17:27:08.524000 audit: BPF prog-id=165 op=UNLOAD Dec 12 17:27:08.524000 audit[3497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:08.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134663835613735613133383465393830613136336162386338613164 Dec 12 17:27:08.525000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:27:08.525000 audit[3497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:08.525000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134663835613735613133383465393830613136336162386338613164 Dec 12 17:27:08.525000 audit: BPF prog-id=166 op=LOAD Dec 12 17:27:08.525000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3367 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:08.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134663835613735613133383465393830613136336162386338613164 Dec 12 17:27:08.550553 containerd[1608]: time="2025-12-12T17:27:08.550510910Z" level=info msg="StartContainer for \"14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc\" returns successfully" Dec 12 17:27:08.568580 systemd[1]: cri-containerd-14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc.scope: Deactivated successfully. Dec 12 17:27:08.574000 audit: BPF prog-id=166 op=UNLOAD Dec 12 17:27:08.598482 containerd[1608]: time="2025-12-12T17:27:08.598417965Z" level=info msg="received container exit event container_id:\"14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc\" id:\"14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc\" pid:3509 exited_at:{seconds:1765560428 nanos:591902242}" Dec 12 17:27:08.650555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14f85a75a1384e980a163ab8c8a1da85e66a28094b775c478cd5424839c52fcc-rootfs.mount: Deactivated successfully. 
Dec 12 17:27:09.033820 kubelet[2781]: E1212 17:27:09.033768 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:09.121340 kubelet[2781]: E1212 17:27:09.121307 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:09.122334 kubelet[2781]: E1212 17:27:09.121447 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:09.123338 containerd[1608]: time="2025-12-12T17:27:09.123303788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:27:10.124345 kubelet[2781]: E1212 17:27:10.124175 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:11.030975 kubelet[2781]: E1212 17:27:11.030936 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:11.183659 containerd[1608]: time="2025-12-12T17:27:11.183614766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:11.184611 containerd[1608]: time="2025-12-12T17:27:11.184206811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 12 17:27:11.185452 containerd[1608]: time="2025-12-12T17:27:11.185384821Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:11.188897 containerd[1608]: time="2025-12-12T17:27:11.188829603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:11.190142 containerd[1608]: time="2025-12-12T17:27:11.189806117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.066461486s" Dec 12 17:27:11.190142 containerd[1608]: time="2025-12-12T17:27:11.189842600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:27:11.194457 containerd[1608]: time="2025-12-12T17:27:11.194409347Z" level=info msg="CreateContainer within sandbox \"5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:27:11.204143 containerd[1608]: 
time="2025-12-12T17:27:11.202609171Z" level=info msg="Container a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:11.213356 containerd[1608]: time="2025-12-12T17:27:11.213302625Z" level=info msg="CreateContainer within sandbox \"5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908\"" Dec 12 17:27:11.215555 containerd[1608]: time="2025-12-12T17:27:11.213864867Z" level=info msg="StartContainer for \"a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908\"" Dec 12 17:27:11.223141 containerd[1608]: time="2025-12-12T17:27:11.221461325Z" level=info msg="connecting to shim a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908" address="unix:///run/containerd/s/109fb3ba16683bd25cccbc2808b564120b2a9b8ae610be042ff5a1dad5c5609a" protocol=ttrpc version=3 Dec 12 17:27:11.268933 systemd[1]: Started cri-containerd-a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908.scope - libcontainer container a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908. Dec 12 17:27:11.333000 audit: BPF prog-id=167 op=LOAD Dec 12 17:27:11.333000 audit[3558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3367 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135666135643764326231306130366436646133613766623463613866 Dec 12 17:27:11.333000 audit: BPF prog-id=168 op=LOAD Dec 12 17:27:11.333000 audit[3558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3367 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135666135643764326231306130366436646133613766623463613866 Dec 12 17:27:11.333000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:27:11.333000 audit[3558]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135666135643764326231306130366436646133613766623463613866 Dec 12 17:27:11.334000 audit: BPF prog-id=167 op=UNLOAD Dec 12 17:27:11.334000 audit[3558]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135666135643764326231306130366436646133613766623463613866 Dec 12 17:27:11.334000 audit: BPF prog-id=169 op=LOAD Dec 12 17:27:11.334000 audit[3558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3367 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:11.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135666135643764326231306130366436646133613766623463613866 Dec 12 17:27:11.352834 containerd[1608]: time="2025-12-12T17:27:11.352783237Z" level=info msg="StartContainer for \"a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908\" returns successfully" Dec 12 17:27:11.903488 systemd[1]: cri-containerd-a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908.scope: Deactivated successfully. Dec 12 17:27:11.903920 systemd[1]: cri-containerd-a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908.scope: Consumed 480ms CPU time, 174.9M memory peak, 3.5M read from disk, 165.9M written to disk. Dec 12 17:27:11.905555 containerd[1608]: time="2025-12-12T17:27:11.905491850Z" level=info msg="received container exit event container_id:\"a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908\" id:\"a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908\" pid:3570 exited_at:{seconds:1765560431 nanos:905010893}" Dec 12 17:27:11.906660 containerd[1608]: time="2025-12-12T17:27:11.906619536Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:27:11.909000 audit: BPF prog-id=169 op=UNLOAD Dec 12 17:27:11.928978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5fa5d7d2b10a06d6da3a7fb4ca8f68072bbd41ac23f82cfb33711f7fdc64908-rootfs.mount: Deactivated successfully. Dec 12 17:27:11.999363 kubelet[2781]: I1212 17:27:11.999174 2781 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:27:12.045843 systemd[1]: Created slice kubepods-burstable-pod66cfa93c_689b_443a_b94e_5457d1ad65b4.slice - libcontainer container kubepods-burstable-pod66cfa93c_689b_443a_b94e_5457d1ad65b4.slice. Dec 12 17:27:12.056768 systemd[1]: Created slice kubepods-besteffort-pod72a9762c_3e28_4065_81d7_b33bc02428ff.slice - libcontainer container kubepods-besteffort-pod72a9762c_3e28_4065_81d7_b33bc02428ff.slice. Dec 12 17:27:12.066176 systemd[1]: Created slice kubepods-besteffort-pod40904382_6d66_45a7_b690_70451381869b.slice - libcontainer container kubepods-besteffort-pod40904382_6d66_45a7_b690_70451381869b.slice. Dec 12 17:27:12.073837 systemd[1]: Created slice kubepods-besteffort-pod1274eff9_19f2_4a62_a07e_e088770fa339.slice - libcontainer container kubepods-besteffort-pod1274eff9_19f2_4a62_a07e_e088770fa339.slice. 
Dec 12 17:27:12.081488 systemd[1]: Created slice kubepods-burstable-pod8244e43b_886c_44bc_ad6f_47d7edb4df86.slice - libcontainer container kubepods-burstable-pod8244e43b_886c_44bc_ad6f_47d7edb4df86.slice. Dec 12 17:27:12.092451 systemd[1]: Created slice kubepods-besteffort-pod79d51bdf_2c92_4c03_a442_924ffa919312.slice - libcontainer container kubepods-besteffort-pod79d51bdf_2c92_4c03_a442_924ffa919312.slice. Dec 12 17:27:12.097516 systemd[1]: Created slice kubepods-besteffort-pod15a11d37_fdc8_4b22_a7c5_9c4c4246dd24.slice - libcontainer container kubepods-besteffort-pod15a11d37_fdc8_4b22_a7c5_9c4c4246dd24.slice. Dec 12 17:27:12.123842 kubelet[2781]: I1212 17:27:12.123763 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15a11d37-fdc8-4b22-a7c5-9c4c4246dd24-goldmane-ca-bundle\") pod \"goldmane-666569f655-nnrnj\" (UID: \"15a11d37-fdc8-4b22-a7c5-9c4c4246dd24\") " pod="calico-system/goldmane-666569f655-nnrnj" Dec 12 17:27:12.123842 kubelet[2781]: I1212 17:27:12.123841 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40904382-6d66-45a7-b690-70451381869b-whisker-ca-bundle\") pod \"whisker-5fb4c56cb5-7sdzl\" (UID: \"40904382-6d66-45a7-b690-70451381869b\") " pod="calico-system/whisker-5fb4c56cb5-7sdzl" Dec 12 17:27:12.123842 kubelet[2781]: I1212 17:27:12.123863 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/15a11d37-fdc8-4b22-a7c5-9c4c4246dd24-goldmane-key-pair\") pod \"goldmane-666569f655-nnrnj\" (UID: \"15a11d37-fdc8-4b22-a7c5-9c4c4246dd24\") " pod="calico-system/goldmane-666569f655-nnrnj" Dec 12 17:27:12.123842 kubelet[2781]: I1212 17:27:12.123891 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wtf8\" (UniqueName: \"kubernetes.io/projected/40904382-6d66-45a7-b690-70451381869b-kube-api-access-8wtf8\") pod \"whisker-5fb4c56cb5-7sdzl\" (UID: \"40904382-6d66-45a7-b690-70451381869b\") " pod="calico-system/whisker-5fb4c56cb5-7sdzl" Dec 12 17:27:12.124159 kubelet[2781]: I1212 17:27:12.123931 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shfcn\" (UniqueName: \"kubernetes.io/projected/1274eff9-19f2-4a62-a07e-e088770fa339-kube-api-access-shfcn\") pod \"calico-apiserver-5f88db5b6c-t62vd\" (UID: \"1274eff9-19f2-4a62-a07e-e088770fa339\") " pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" Dec 12 17:27:12.128132 kubelet[2781]: I1212 17:27:12.127985 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7htq7\" (UniqueName: \"kubernetes.io/projected/72a9762c-3e28-4065-81d7-b33bc02428ff-kube-api-access-7htq7\") pod \"calico-kube-controllers-795ddb8d7d-wsdsj\" (UID: \"72a9762c-3e28-4065-81d7-b33bc02428ff\") " pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" Dec 12 17:27:12.128132 kubelet[2781]: I1212 17:27:12.128098 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwpkf\" (UniqueName: \"kubernetes.io/projected/15a11d37-fdc8-4b22-a7c5-9c4c4246dd24-kube-api-access-lwpkf\") pod \"goldmane-666569f655-nnrnj\" (UID: \"15a11d37-fdc8-4b22-a7c5-9c4c4246dd24\") " 
pod="calico-system/goldmane-666569f655-nnrnj" Dec 12 17:27:12.128132 kubelet[2781]: I1212 17:27:12.128159 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66cfa93c-689b-443a-b94e-5457d1ad65b4-config-volume\") pod \"coredns-674b8bbfcf-9kkcr\" (UID: \"66cfa93c-689b-443a-b94e-5457d1ad65b4\") " pod="kube-system/coredns-674b8bbfcf-9kkcr" Dec 12 17:27:12.128328 kubelet[2781]: I1212 17:27:12.128181 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/79d51bdf-2c92-4c03-a442-924ffa919312-calico-apiserver-certs\") pod \"calico-apiserver-5f88db5b6c-zlv8l\" (UID: \"79d51bdf-2c92-4c03-a442-924ffa919312\") " pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" Dec 12 17:27:12.128328 kubelet[2781]: I1212 17:27:12.128201 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40904382-6d66-45a7-b690-70451381869b-whisker-backend-key-pair\") pod \"whisker-5fb4c56cb5-7sdzl\" (UID: \"40904382-6d66-45a7-b690-70451381869b\") " pod="calico-system/whisker-5fb4c56cb5-7sdzl" Dec 12 17:27:12.128328 kubelet[2781]: I1212 17:27:12.128218 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8244e43b-886c-44bc-ad6f-47d7edb4df86-config-volume\") pod \"coredns-674b8bbfcf-cpxb4\" (UID: \"8244e43b-886c-44bc-ad6f-47d7edb4df86\") " pod="kube-system/coredns-674b8bbfcf-cpxb4" Dec 12 17:27:12.128328 kubelet[2781]: I1212 17:27:12.128265 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a11d37-fdc8-4b22-a7c5-9c4c4246dd24-config\") pod \"goldmane-666569f655-nnrnj\" (UID: \"15a11d37-fdc8-4b22-a7c5-9c4c4246dd24\") " pod="calico-system/goldmane-666569f655-nnrnj" Dec 12 17:27:12.128328 kubelet[2781]: I1212 17:27:12.128298 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9z6q\" (UniqueName: \"kubernetes.io/projected/79d51bdf-2c92-4c03-a442-924ffa919312-kube-api-access-s9z6q\") pod \"calico-apiserver-5f88db5b6c-zlv8l\" (UID: \"79d51bdf-2c92-4c03-a442-924ffa919312\") " pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" Dec 12 17:27:12.128526 kubelet[2781]: I1212 17:27:12.128318 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grt8t\" (UniqueName: \"kubernetes.io/projected/8244e43b-886c-44bc-ad6f-47d7edb4df86-kube-api-access-grt8t\") pod \"coredns-674b8bbfcf-cpxb4\" (UID: \"8244e43b-886c-44bc-ad6f-47d7edb4df86\") " pod="kube-system/coredns-674b8bbfcf-cpxb4" Dec 12 17:27:12.128526 kubelet[2781]: I1212 17:27:12.128363 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1274eff9-19f2-4a62-a07e-e088770fa339-calico-apiserver-certs\") pod \"calico-apiserver-5f88db5b6c-t62vd\" (UID: \"1274eff9-19f2-4a62-a07e-e088770fa339\") " pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" Dec 12 17:27:12.128526 kubelet[2781]: I1212 17:27:12.128384 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz84f\" 
(UniqueName: \"kubernetes.io/projected/66cfa93c-689b-443a-b94e-5457d1ad65b4-kube-api-access-lz84f\") pod \"coredns-674b8bbfcf-9kkcr\" (UID: \"66cfa93c-689b-443a-b94e-5457d1ad65b4\") " pod="kube-system/coredns-674b8bbfcf-9kkcr" Dec 12 17:27:12.128526 kubelet[2781]: I1212 17:27:12.128426 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72a9762c-3e28-4065-81d7-b33bc02428ff-tigera-ca-bundle\") pod \"calico-kube-controllers-795ddb8d7d-wsdsj\" (UID: \"72a9762c-3e28-4065-81d7-b33bc02428ff\") " pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" Dec 12 17:27:12.145219 kubelet[2781]: E1212 17:27:12.145187 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:12.147097 containerd[1608]: time="2025-12-12T17:27:12.147063833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:27:12.352672 kubelet[2781]: E1212 17:27:12.352641 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:12.359231 containerd[1608]: time="2025-12-12T17:27:12.356793162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9kkcr,Uid:66cfa93c-689b-443a-b94e-5457d1ad65b4,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:12.362286 containerd[1608]: time="2025-12-12T17:27:12.362201437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795ddb8d7d-wsdsj,Uid:72a9762c-3e28-4065-81d7-b33bc02428ff,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:12.371945 containerd[1608]: time="2025-12-12T17:27:12.371911227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fb4c56cb5-7sdzl,Uid:40904382-6d66-45a7-b690-70451381869b,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:12.379151 containerd[1608]: time="2025-12-12T17:27:12.377422430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88db5b6c-t62vd,Uid:1274eff9-19f2-4a62-a07e-e088770fa339,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:27:12.388656 kubelet[2781]: E1212 17:27:12.388619 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:12.390698 containerd[1608]: time="2025-12-12T17:27:12.390644316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpxb4,Uid:8244e43b-886c-44bc-ad6f-47d7edb4df86,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:12.400330 containerd[1608]: time="2025-12-12T17:27:12.397367968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88db5b6c-zlv8l,Uid:79d51bdf-2c92-4c03-a442-924ffa919312,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:27:12.406085 containerd[1608]: time="2025-12-12T17:27:12.406022880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nnrnj,Uid:15a11d37-fdc8-4b22-a7c5-9c4c4246dd24,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:12.492679 containerd[1608]: time="2025-12-12T17:27:12.492631851Z" level=error msg="Failed to destroy network for sandbox \"4b92e5bc79ff7fe5453ddfb7e8f069dfb96dca52124f410ca3bd4eea54a698c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.493028 containerd[1608]: time="2025-12-12T17:27:12.492999518Z" level=error msg="Failed to destroy network for sandbox \"edc1fca5c64c0385d2e89a5a20a2c3c100d26d519521e8a5086ff25913051ca6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.495573 containerd[1608]: time="2025-12-12T17:27:12.495527862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88db5b6c-t62vd,Uid:1274eff9-19f2-4a62-a07e-e088770fa339,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"edc1fca5c64c0385d2e89a5a20a2c3c100d26d519521e8a5086ff25913051ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.496462 containerd[1608]: time="2025-12-12T17:27:12.496414127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9kkcr,Uid:66cfa93c-689b-443a-b94e-5457d1ad65b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b92e5bc79ff7fe5453ddfb7e8f069dfb96dca52124f410ca3bd4eea54a698c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.498301 kubelet[2781]: E1212 17:27:12.498242 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edc1fca5c64c0385d2e89a5a20a2c3c100d26d519521e8a5086ff25913051ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.498423 kubelet[2781]: E1212 17:27:12.498325 2781 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edc1fca5c64c0385d2e89a5a20a2c3c100d26d519521e8a5086ff25913051ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" Dec 12 17:27:12.498423 kubelet[2781]: E1212 17:27:12.498346 2781 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edc1fca5c64c0385d2e89a5a20a2c3c100d26d519521e8a5086ff25913051ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" Dec 12 17:27:12.498423 kubelet[2781]: E1212 17:27:12.498397 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f88db5b6c-t62vd_calico-apiserver(1274eff9-19f2-4a62-a07e-e088770fa339)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f88db5b6c-t62vd_calico-apiserver(1274eff9-19f2-4a62-a07e-e088770fa339)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"edc1fca5c64c0385d2e89a5a20a2c3c100d26d519521e8a5086ff25913051ca6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" podUID="1274eff9-19f2-4a62-a07e-e088770fa339" Dec 12 17:27:12.498569 kubelet[2781]: E1212 17:27:12.498522 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b92e5bc79ff7fe5453ddfb7e8f069dfb96dca52124f410ca3bd4eea54a698c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.498606 kubelet[2781]: E1212 17:27:12.498584 2781 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b92e5bc79ff7fe5453ddfb7e8f069dfb96dca52124f410ca3bd4eea54a698c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9kkcr" Dec 12 17:27:12.498606 kubelet[2781]: E1212 17:27:12.498601 2781 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b92e5bc79ff7fe5453ddfb7e8f069dfb96dca52124f410ca3bd4eea54a698c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9kkcr" Dec 12 17:27:12.498700 kubelet[2781]: E1212 17:27:12.498636 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9kkcr_kube-system(66cfa93c-689b-443a-b94e-5457d1ad65b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9kkcr_kube-system(66cfa93c-689b-443a-b94e-5457d1ad65b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b92e5bc79ff7fe5453ddfb7e8f069dfb96dca52124f410ca3bd4eea54a698c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9kkcr" podUID="66cfa93c-689b-443a-b94e-5457d1ad65b4" Dec 12 17:27:12.509423 containerd[1608]: time="2025-12-12T17:27:12.509346352Z" level=error msg="Failed to destroy network for sandbox \"d358f6d33e101e084ba78fe670b6f84a5348884b36e924cbc95cdda64fb9d329\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.512980 containerd[1608]: time="2025-12-12T17:27:12.512932295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88db5b6c-zlv8l,Uid:79d51bdf-2c92-4c03-a442-924ffa919312,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d358f6d33e101e084ba78fe670b6f84a5348884b36e924cbc95cdda64fb9d329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Dec 12 17:27:12.513276 kubelet[2781]: E1212 17:27:12.513238 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d358f6d33e101e084ba78fe670b6f84a5348884b36e924cbc95cdda64fb9d329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.513334 kubelet[2781]: E1212 17:27:12.513296 2781 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d358f6d33e101e084ba78fe670b6f84a5348884b36e924cbc95cdda64fb9d329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" Dec 12 17:27:12.513334 kubelet[2781]: E1212 17:27:12.513319 2781 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d358f6d33e101e084ba78fe670b6f84a5348884b36e924cbc95cdda64fb9d329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" Dec 12 17:27:12.513396 kubelet[2781]: E1212 17:27:12.513369 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f88db5b6c-zlv8l_calico-apiserver(79d51bdf-2c92-4c03-a442-924ffa919312)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f88db5b6c-zlv8l_calico-apiserver(79d51bdf-2c92-4c03-a442-924ffa919312)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d358f6d33e101e084ba78fe670b6f84a5348884b36e924cbc95cdda64fb9d329\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" podUID="79d51bdf-2c92-4c03-a442-924ffa919312" Dec 12 17:27:12.515106 containerd[1608]: time="2025-12-12T17:27:12.515066211Z" level=error msg="Failed to destroy network for sandbox \"3155778515d1b5604d650fd924f04ba186328da17586f7e3ce7f6f4d129bbed4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.518436 containerd[1608]: time="2025-12-12T17:27:12.518392254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpxb4,Uid:8244e43b-886c-44bc-ad6f-47d7edb4df86,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3155778515d1b5604d650fd924f04ba186328da17586f7e3ce7f6f4d129bbed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.518984 kubelet[2781]: E1212 17:27:12.518589 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3155778515d1b5604d650fd924f04ba186328da17586f7e3ce7f6f4d129bbed4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.518984 kubelet[2781]: E1212 17:27:12.518646 2781 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3155778515d1b5604d650fd924f04ba186328da17586f7e3ce7f6f4d129bbed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cpxb4" Dec 12 17:27:12.518984 kubelet[2781]: E1212 17:27:12.518667 2781 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3155778515d1b5604d650fd924f04ba186328da17586f7e3ce7f6f4d129bbed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cpxb4" Dec 12 17:27:12.519086 kubelet[2781]: E1212 17:27:12.518709 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cpxb4_kube-system(8244e43b-886c-44bc-ad6f-47d7edb4df86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cpxb4_kube-system(8244e43b-886c-44bc-ad6f-47d7edb4df86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3155778515d1b5604d650fd924f04ba186328da17586f7e3ce7f6f4d129bbed4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cpxb4" podUID="8244e43b-886c-44bc-ad6f-47d7edb4df86" Dec 12 17:27:12.524009 containerd[1608]: time="2025-12-12T17:27:12.523937619Z" level=error msg="Failed to destroy network for sandbox \"073db4460edc794a0bf580b292e46de861a84e16069f333cc2df4e097f84e785\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.525948 containerd[1608]: time="2025-12-12T17:27:12.525904123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795ddb8d7d-wsdsj,Uid:72a9762c-3e28-4065-81d7-b33bc02428ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"073db4460edc794a0bf580b292e46de861a84e16069f333cc2df4e097f84e785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.526486 kubelet[2781]: E1212 17:27:12.526371 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"073db4460edc794a0bf580b292e46de861a84e16069f333cc2df4e097f84e785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.526486 kubelet[2781]: E1212 17:27:12.526431 2781 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"073db4460edc794a0bf580b292e46de861a84e16069f333cc2df4e097f84e785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" Dec 12 17:27:12.526486 kubelet[2781]: E1212 17:27:12.526452 2781 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"073db4460edc794a0bf580b292e46de861a84e16069f333cc2df4e097f84e785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" Dec 12 17:27:12.529800 kubelet[2781]: E1212 17:27:12.526641 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-795ddb8d7d-wsdsj_calico-system(72a9762c-3e28-4065-81d7-b33bc02428ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-795ddb8d7d-wsdsj_calico-system(72a9762c-3e28-4065-81d7-b33bc02428ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"073db4460edc794a0bf580b292e46de861a84e16069f333cc2df4e097f84e785\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" podUID="72a9762c-3e28-4065-81d7-b33bc02428ff" Dec 12 17:27:12.533991 containerd[1608]: time="2025-12-12T17:27:12.533936310Z" level=error msg="Failed to destroy network for sandbox \"9ad4bfae4f004e730b256df22e342f2da9a7f225596f5e760c8ec744ae7f8461\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.534091 containerd[1608]: time="2025-12-12T17:27:12.533935870Z" level=error msg="Failed to destroy network for sandbox \"28c14e53935f2c3af980c710126e0fb763b637175675fcde0c3493c1d60be80c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.535745 containerd[1608]: time="2025-12-12T17:27:12.535700639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nnrnj,Uid:15a11d37-fdc8-4b22-a7c5-9c4c4246dd24,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad4bfae4f004e730b256df22e342f2da9a7f225596f5e760c8ec744ae7f8461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.536061 kubelet[2781]: E1212 17:27:12.535955 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad4bfae4f004e730b256df22e342f2da9a7f225596f5e760c8ec744ae7f8461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.536061 kubelet[2781]: E1212 
17:27:12.536005 2781 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad4bfae4f004e730b256df22e342f2da9a7f225596f5e760c8ec744ae7f8461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nnrnj" Dec 12 17:27:12.536061 kubelet[2781]: E1212 17:27:12.536028 2781 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad4bfae4f004e730b256df22e342f2da9a7f225596f5e760c8ec744ae7f8461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nnrnj" Dec 12 17:27:12.536497 kubelet[2781]: E1212 17:27:12.536248 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nnrnj_calico-system(15a11d37-fdc8-4b22-a7c5-9c4c4246dd24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nnrnj_calico-system(15a11d37-fdc8-4b22-a7c5-9c4c4246dd24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ad4bfae4f004e730b256df22e342f2da9a7f225596f5e760c8ec744ae7f8461\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nnrnj" podUID="15a11d37-fdc8-4b22-a7c5-9c4c4246dd24" Dec 12 17:27:12.537598 containerd[1608]: time="2025-12-12T17:27:12.537549414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fb4c56cb5-7sdzl,Uid:40904382-6d66-45a7-b690-70451381869b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c14e53935f2c3af980c710126e0fb763b637175675fcde0c3493c1d60be80c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.538195 kubelet[2781]: E1212 17:27:12.538159 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c14e53935f2c3af980c710126e0fb763b637175675fcde0c3493c1d60be80c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:12.538294 kubelet[2781]: E1212 17:27:12.538275 2781 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c14e53935f2c3af980c710126e0fb763b637175675fcde0c3493c1d60be80c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fb4c56cb5-7sdzl" Dec 12 17:27:12.538392 kubelet[2781]: E1212 17:27:12.538354 2781 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c14e53935f2c3af980c710126e0fb763b637175675fcde0c3493c1d60be80c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fb4c56cb5-7sdzl" Dec 12 17:27:12.538507 kubelet[2781]: E1212 17:27:12.538483 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fb4c56cb5-7sdzl_calico-system(40904382-6d66-45a7-b690-70451381869b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fb4c56cb5-7sdzl_calico-system(40904382-6d66-45a7-b690-70451381869b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28c14e53935f2c3af980c710126e0fb763b637175675fcde0c3493c1d60be80c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fb4c56cb5-7sdzl" podUID="40904382-6d66-45a7-b690-70451381869b" Dec 12 17:27:13.037183 systemd[1]: Created slice kubepods-besteffort-pod5fefc895_2faa_4b6c_b800_5fdfceed3426.slice - libcontainer container kubepods-besteffort-pod5fefc895_2faa_4b6c_b800_5fdfceed3426.slice. Dec 12 17:27:13.040432 containerd[1608]: time="2025-12-12T17:27:13.040398657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffhww,Uid:5fefc895-2faa-4b6c-b800-5fdfceed3426,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:13.095607 containerd[1608]: time="2025-12-12T17:27:13.095557414Z" level=error msg="Failed to destroy network for sandbox \"08f48117a27c89117d32b41671796a18e64dbd40a5cbfe3cf6327f96520ebf73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:13.097545 containerd[1608]: time="2025-12-12T17:27:13.097509471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffhww,Uid:5fefc895-2faa-4b6c-b800-5fdfceed3426,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f48117a27c89117d32b41671796a18e64dbd40a5cbfe3cf6327f96520ebf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:13.097846 kubelet[2781]: E1212 17:27:13.097803 2781 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f48117a27c89117d32b41671796a18e64dbd40a5cbfe3cf6327f96520ebf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:27:13.099232 kubelet[2781]: E1212 17:27:13.098226 2781 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f48117a27c89117d32b41671796a18e64dbd40a5cbfe3cf6327f96520ebf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ffhww" Dec 12 17:27:13.099232 kubelet[2781]: E1212 17:27:13.098263 2781 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"08f48117a27c89117d32b41671796a18e64dbd40a5cbfe3cf6327f96520ebf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ffhww" Dec 12 17:27:13.099232 kubelet[2781]: E1212 17:27:13.098317 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ffhww_calico-system(5fefc895-2faa-4b6c-b800-5fdfceed3426)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ffhww_calico-system(5fefc895-2faa-4b6c-b800-5fdfceed3426)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08f48117a27c89117d32b41671796a18e64dbd40a5cbfe3cf6327f96520ebf73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:15.902765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1599485907.mount: Deactivated successfully. Dec 12 17:27:16.054851 containerd[1608]: time="2025-12-12T17:27:16.054228840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:16.054851 containerd[1608]: time="2025-12-12T17:27:16.054806636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:27:16.055782 containerd[1608]: time="2025-12-12T17:27:16.055753536Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:16.063424 containerd[1608]: time="2025-12-12T17:27:16.063358974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:27:16.064195 containerd[1608]: time="2025-12-12T17:27:16.064149784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.916807091s" Dec 12 17:27:16.064366 containerd[1608]: time="2025-12-12T17:27:16.064182386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:27:16.088405 containerd[1608]: time="2025-12-12T17:27:16.088348665Z" level=info msg="CreateContainer within sandbox \"5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:27:16.096988 containerd[1608]: time="2025-12-12T17:27:16.096952766Z" level=info msg="Container 4df6b4305008c66f71f164858ad981cea2beb8d7534a273e7a8b14880ab6e72a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:16.105961 containerd[1608]: time="2025-12-12T17:27:16.105913650Z" level=info msg="CreateContainer within sandbox \"5b20323b23c4d5637b23d8505b102549f5b4c858f7fd49eb555cae8994286e74\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4df6b4305008c66f71f164858ad981cea2beb8d7534a273e7a8b14880ab6e72a\"" Dec 12 17:27:16.106863 containerd[1608]: time="2025-12-12T17:27:16.106820707Z" level=info msg="StartContainer for \"4df6b4305008c66f71f164858ad981cea2beb8d7534a273e7a8b14880ab6e72a\"" Dec 12 17:27:16.111412 containerd[1608]: time="2025-12-12T17:27:16.110047030Z" level=info msg="connecting to shim 4df6b4305008c66f71f164858ad981cea2beb8d7534a273e7a8b14880ab6e72a" address="unix:///run/containerd/s/109fb3ba16683bd25cccbc2808b564120b2a9b8ae610be042ff5a1dad5c5609a" protocol=ttrpc version=3 Dec 12 17:27:16.132324 systemd[1]: Started cri-containerd-4df6b4305008c66f71f164858ad981cea2beb8d7534a273e7a8b14880ab6e72a.scope - libcontainer container 4df6b4305008c66f71f164858ad981cea2beb8d7534a273e7a8b14880ab6e72a. Dec 12 17:27:16.206000 audit: BPF prog-id=170 op=LOAD Dec 12 17:27:16.209474 kernel: kauditd_printk_skb: 50 callbacks suppressed Dec 12 17:27:16.209608 kernel: audit: type=1334 audit(1765560436.206:565): prog-id=170 op=LOAD Dec 12 17:27:16.206000 audit[3886]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3367 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:16.213249 kernel: audit: type=1300 audit(1765560436.206:565): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3367 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:16.213333 kernel: audit: type=1327 audit(1765560436.206:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663662343330353030386336366637316631363438353861643938 Dec 12 17:27:16.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663662343330353030386336366637316631363438353861643938 Dec 12 17:27:16.208000 audit: BPF prog-id=171 op=LOAD Dec 12 17:27:16.217369 kernel: audit: type=1334 audit(1765560436.208:566): prog-id=171 op=LOAD Dec 12 17:27:16.208000 audit[3886]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3367 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:16.221002 kernel: audit: type=1300 audit(1765560436.208:566): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3367 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:16.221149 kernel: audit: type=1327 audit(1765560436.208:566): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663662343330353030386336366637316631363438353861643938 Dec 12 17:27:16.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663662343330353030386336366637316631363438353861643938 Dec 12 17:27:16.208000 audit: BPF prog-id=171 op=UNLOAD Dec 12 17:27:16.225524 kernel: audit: type=1334 audit(1765560436.208:567): prog-id=171 op=UNLOAD Dec 12 17:27:16.208000 audit[3886]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:16.229163 kernel: audit: type=1300 audit(1765560436.208:567): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:16.229222 kernel: audit: type=1327 audit(1765560436.208:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663662343330353030386336366637316631363438353861643938 Dec 12 17:27:16.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663662343330353030386336366637316631363438353861643938 Dec 12 17:27:16.208000 audit: BPF prog-id=170 op=UNLOAD Dec 12 17:27:16.232809 kernel: audit: type=1334 audit(1765560436.208:568): prog-id=170 op=UNLOAD Dec 12 17:27:16.208000 audit[3886]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:16.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663662343330353030386336366637316631363438353861643938 Dec 12 17:27:16.208000 audit: BPF prog-id=172 op=LOAD Dec 12 17:27:16.208000 audit[3886]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3367 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:16.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663662343330353030386336366637316631363438353861643938 Dec 12 17:27:16.254777 containerd[1608]: 
time="2025-12-12T17:27:16.254729526Z" level=info msg="StartContainer for \"4df6b4305008c66f71f164858ad981cea2beb8d7534a273e7a8b14880ab6e72a\" returns successfully" Dec 12 17:27:16.377693 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:27:16.377835 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 17:27:16.555445 kubelet[2781]: I1212 17:27:16.555286 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40904382-6d66-45a7-b690-70451381869b-whisker-ca-bundle\") pod \"40904382-6d66-45a7-b690-70451381869b\" (UID: \"40904382-6d66-45a7-b690-70451381869b\") " Dec 12 17:27:16.556383 kubelet[2781]: I1212 17:27:16.556106 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wtf8\" (UniqueName: \"kubernetes.io/projected/40904382-6d66-45a7-b690-70451381869b-kube-api-access-8wtf8\") pod \"40904382-6d66-45a7-b690-70451381869b\" (UID: \"40904382-6d66-45a7-b690-70451381869b\") " Dec 12 17:27:16.556383 kubelet[2781]: I1212 17:27:16.556150 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40904382-6d66-45a7-b690-70451381869b-whisker-backend-key-pair\") pod \"40904382-6d66-45a7-b690-70451381869b\" (UID: \"40904382-6d66-45a7-b690-70451381869b\") " Dec 12 17:27:16.566186 kubelet[2781]: I1212 17:27:16.566145 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40904382-6d66-45a7-b690-70451381869b-kube-api-access-8wtf8" (OuterVolumeSpecName: "kube-api-access-8wtf8") pod "40904382-6d66-45a7-b690-70451381869b" (UID: "40904382-6d66-45a7-b690-70451381869b"). InnerVolumeSpecName "kube-api-access-8wtf8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:27:16.567995 kubelet[2781]: I1212 17:27:16.567964 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40904382-6d66-45a7-b690-70451381869b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "40904382-6d66-45a7-b690-70451381869b" (UID: "40904382-6d66-45a7-b690-70451381869b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:27:16.574511 kubelet[2781]: I1212 17:27:16.574471 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40904382-6d66-45a7-b690-70451381869b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "40904382-6d66-45a7-b690-70451381869b" (UID: "40904382-6d66-45a7-b690-70451381869b"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:27:16.657021 kubelet[2781]: I1212 17:27:16.656969 2781 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40904382-6d66-45a7-b690-70451381869b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 17:27:16.657021 kubelet[2781]: I1212 17:27:16.657007 2781 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8wtf8\" (UniqueName: \"kubernetes.io/projected/40904382-6d66-45a7-b690-70451381869b-kube-api-access-8wtf8\") on node \"localhost\" DevicePath \"\"" Dec 12 17:27:16.657021 kubelet[2781]: I1212 17:27:16.657017 2781 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40904382-6d66-45a7-b690-70451381869b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 17:27:16.902873 systemd[1]: var-lib-kubelet-pods-40904382\x2d6d66\x2d45a7\x2db690\x2d70451381869b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8wtf8.mount: Deactivated successfully. Dec 12 17:27:16.902983 systemd[1]: var-lib-kubelet-pods-40904382\x2d6d66\x2d45a7\x2db690\x2d70451381869b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:27:17.041584 systemd[1]: Removed slice kubepods-besteffort-pod40904382_6d66_45a7_b690_70451381869b.slice - libcontainer container kubepods-besteffort-pod40904382_6d66_45a7_b690_70451381869b.slice. Dec 12 17:27:17.196155 kubelet[2781]: E1212 17:27:17.195693 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:17.213290 kubelet[2781]: I1212 17:27:17.213192 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kf77f" podStartSLOduration=2.150341303 podStartE2EDuration="12.213177178s" podCreationTimestamp="2025-12-12 17:27:05 +0000 UTC" firstStartedPulling="2025-12-12 17:27:06.002679555 +0000 UTC m=+23.063305125" lastFinishedPulling="2025-12-12 17:27:16.06551543 +0000 UTC m=+33.126141000" observedRunningTime="2025-12-12 17:27:17.212534579 +0000 UTC m=+34.273160189" watchObservedRunningTime="2025-12-12 17:27:17.213177178 +0000 UTC m=+34.273802748" Dec 12 17:27:17.285057 systemd[1]: Created slice kubepods-besteffort-pod6cfb895d_8011_4a5d_b5ce_fee54d2880b7.slice - libcontainer container kubepods-besteffort-pod6cfb895d_8011_4a5d_b5ce_fee54d2880b7.slice. 
Dec 12 17:27:17.361308 kubelet[2781]: I1212 17:27:17.361268 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6cfb895d-8011-4a5d-b5ce-fee54d2880b7-whisker-backend-key-pair\") pod \"whisker-5649bfbdf7-hfvhj\" (UID: \"6cfb895d-8011-4a5d-b5ce-fee54d2880b7\") " pod="calico-system/whisker-5649bfbdf7-hfvhj" Dec 12 17:27:17.361308 kubelet[2781]: I1212 17:27:17.361314 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfb895d-8011-4a5d-b5ce-fee54d2880b7-whisker-ca-bundle\") pod \"whisker-5649bfbdf7-hfvhj\" (UID: \"6cfb895d-8011-4a5d-b5ce-fee54d2880b7\") " pod="calico-system/whisker-5649bfbdf7-hfvhj" Dec 12 17:27:17.361526 kubelet[2781]: I1212 17:27:17.361336 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djlw5\" (UniqueName: \"kubernetes.io/projected/6cfb895d-8011-4a5d-b5ce-fee54d2880b7-kube-api-access-djlw5\") pod \"whisker-5649bfbdf7-hfvhj\" (UID: \"6cfb895d-8011-4a5d-b5ce-fee54d2880b7\") " pod="calico-system/whisker-5649bfbdf7-hfvhj" Dec 12 17:27:17.589257 containerd[1608]: time="2025-12-12T17:27:17.589153322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5649bfbdf7-hfvhj,Uid:6cfb895d-8011-4a5d-b5ce-fee54d2880b7,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:17.849568 systemd-networkd[1512]: calid9cb7d89707: Link UP Dec 12 17:27:17.849756 systemd-networkd[1512]: calid9cb7d89707: Gained carrier Dec 12 17:27:17.870315 containerd[1608]: 2025-12-12 17:27:17.612 [INFO][3953] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:27:17.870315 containerd[1608]: 2025-12-12 17:27:17.659 [INFO][3953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0 whisker-5649bfbdf7- calico-system 6cfb895d-8011-4a5d-b5ce-fee54d2880b7 957 0 2025-12-12 17:27:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5649bfbdf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5649bfbdf7-hfvhj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid9cb7d89707 [] [] }} ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Namespace="calico-system" Pod="whisker-5649bfbdf7-hfvhj" WorkloadEndpoint="localhost-k8s-whisker--5649bfbdf7--hfvhj-" Dec 12 17:27:17.870315 containerd[1608]: 2025-12-12 17:27:17.659 [INFO][3953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Namespace="calico-system" Pod="whisker-5649bfbdf7-hfvhj" WorkloadEndpoint="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" Dec 12 17:27:17.870315 containerd[1608]: 2025-12-12 17:27:17.781 [INFO][4019] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" HandleID="k8s-pod-network.03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Workload="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.781 [INFO][4019] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" HandleID="k8s-pod-network.03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Workload="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001357d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5649bfbdf7-hfvhj", "timestamp":"2025-12-12 17:27:17.781230822 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.781 [INFO][4019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.781 [INFO][4019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.781 [INFO][4019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.803 [INFO][4019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" host="localhost" Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.811 [INFO][4019] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.816 [INFO][4019] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.821 [INFO][4019] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.823 [INFO][4019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:17.870563 containerd[1608]: 2025-12-12 17:27:17.823 [INFO][4019] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" host="localhost" Dec 12 17:27:17.870762 containerd[1608]: 2025-12-12 17:27:17.826 [INFO][4019] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4 Dec 12 17:27:17.870762 containerd[1608]: 2025-12-12 17:27:17.830 [INFO][4019] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" host="localhost" Dec 12 17:27:17.870762 containerd[1608]: 2025-12-12 17:27:17.837 [INFO][4019] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" host="localhost" Dec 12 17:27:17.870762 containerd[1608]: 2025-12-12 17:27:17.837 [INFO][4019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" host="localhost" Dec 12 17:27:17.870762 containerd[1608]: 2025-12-12 17:27:17.837 [INFO][4019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:17.870762 containerd[1608]: 2025-12-12 17:27:17.837 [INFO][4019] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" HandleID="k8s-pod-network.03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Workload="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" Dec 12 17:27:17.870876 containerd[1608]: 2025-12-12 17:27:17.840 [INFO][3953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Namespace="calico-system" Pod="whisker-5649bfbdf7-hfvhj" WorkloadEndpoint="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0", GenerateName:"whisker-5649bfbdf7-", Namespace:"calico-system", SelfLink:"", UID:"6cfb895d-8011-4a5d-b5ce-fee54d2880b7", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5649bfbdf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5649bfbdf7-hfvhj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid9cb7d89707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:17.870876 containerd[1608]: 2025-12-12 17:27:17.841 [INFO][3953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Namespace="calico-system" Pod="whisker-5649bfbdf7-hfvhj" WorkloadEndpoint="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" Dec 12 17:27:17.870943 containerd[1608]: 2025-12-12 17:27:17.841 [INFO][3953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9cb7d89707 ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Namespace="calico-system" Pod="whisker-5649bfbdf7-hfvhj" WorkloadEndpoint="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" Dec 12 17:27:17.870943 containerd[1608]: 2025-12-12 17:27:17.847 [INFO][3953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Namespace="calico-system" Pod="whisker-5649bfbdf7-hfvhj" WorkloadEndpoint="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" Dec 12 17:27:17.870981 containerd[1608]: 2025-12-12 17:27:17.853 [INFO][3953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Namespace="calico-system" Pod="whisker-5649bfbdf7-hfvhj" WorkloadEndpoint="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0", GenerateName:"whisker-5649bfbdf7-", Namespace:"calico-system", SelfLink:"", UID:"6cfb895d-8011-4a5d-b5ce-fee54d2880b7", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5649bfbdf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4", Pod:"whisker-5649bfbdf7-hfvhj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid9cb7d89707", MAC:"9e:4d:82:6c:92:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:17.871025 containerd[1608]: 2025-12-12 17:27:17.863 [INFO][3953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" Namespace="calico-system" Pod="whisker-5649bfbdf7-hfvhj" WorkloadEndpoint="localhost-k8s-whisker--5649bfbdf7--hfvhj-eth0" Dec 12 17:27:17.886000 audit: BPF prog-id=173 op=LOAD Dec 12 17:27:17.886000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd8b1e9f8 a2=98 a3=ffffd8b1e9e8 items=0 ppid=4015 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.886000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:17.886000 audit: BPF prog-id=173 op=UNLOAD Dec 12 17:27:17.886000 audit[4113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd8b1e9c8 a3=0 items=0 ppid=4015 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.886000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:17.886000 audit: BPF prog-id=174 op=LOAD Dec 12 17:27:17.886000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd8b1e8a8 a2=74 a3=95 items=0 ppid=4015 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.886000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:17.886000 audit: BPF prog-id=174 op=UNLOAD Dec 12 17:27:17.886000 audit[4113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4015 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.886000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:17.886000 audit: BPF prog-id=175 op=LOAD Dec 12 17:27:17.886000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd8b1e8d8 a2=40 a3=ffffd8b1e908 items=0 ppid=4015 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.886000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:17.886000 audit: BPF prog-id=175 op=UNLOAD Dec 12 17:27:17.886000 audit[4113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd8b1e908 items=0 ppid=4015 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.886000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:27:17.889000 audit: BPF prog-id=176 op=LOAD Dec 12 17:27:17.889000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3852008 a2=98 a3=ffffe3851ff8 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.889000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:17.889000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:27:17.889000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe3851fd8 a3=0 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.889000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:17.889000 audit: BPF prog-id=177 op=LOAD Dec 12 17:27:17.889000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 
a1=ffffe3851c98 a2=74 a3=95 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.889000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:17.889000 audit: BPF prog-id=177 op=UNLOAD Dec 12 17:27:17.889000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.889000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:17.889000 audit: BPF prog-id=178 op=LOAD Dec 12 17:27:17.889000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe3851cf8 a2=94 a3=2 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.889000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:17.889000 audit: BPF prog-id=178 op=UNLOAD Dec 12 17:27:17.889000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.889000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:17.969868 containerd[1608]: time="2025-12-12T17:27:17.969816311Z" level=info msg="connecting to shim 03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4" address="unix:///run/containerd/s/71453f64d9b06202ac52afac0f40874e87ca1cbd80b331cabe17d7e6a2bb07ce" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:17.997000 audit: BPF prog-id=179 op=LOAD Dec 12 17:27:17.997000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe3851cb8 a2=40 a3=ffffe3851ce8 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.997000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:17.997000 audit: BPF prog-id=179 op=UNLOAD Dec 12 17:27:17.997000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe3851ce8 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:17.997000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.001354 systemd[1]: Started cri-containerd-03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4.scope - libcontainer container 03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4. 
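[editor's note: the PROCTITLE fields in the audit records above are hex-encoded command lines with NUL bytes between arguments (standard Linux audit encoding), so the bpftool invocations driven by the Calico dataplane can be recovered straight from the log. A minimal decoder, applied to one of the shorter proctitle values seen here:]

    # Decode an audit PROCTITLE value: it is the process argv, NUL-separated
    # and hex-encoded. The example input is one of the proctitle strings above.
    def decode_proctitle(hexstr: str) -> str:
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
    # -> bpftool map list --json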
Dec 12 17:27:18.009000 audit: BPF prog-id=180 op=LOAD Dec 12 17:27:18.009000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe3851cc8 a2=94 a3=4 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.009000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.009000 audit: BPF prog-id=180 op=UNLOAD Dec 12 17:27:18.009000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.009000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.009000 audit: BPF prog-id=181 op=LOAD Dec 12 17:27:18.009000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3851b08 a2=94 a3=5 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.009000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.009000 audit: BPF prog-id=181 op=UNLOAD Dec 12 17:27:18.009000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.009000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.009000 audit: BPF prog-id=182 op=LOAD Dec 12 17:27:18.009000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe3851d38 a2=94 a3=6 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.009000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.009000 audit: BPF prog-id=182 op=UNLOAD Dec 12 17:27:18.009000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.009000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.009000 audit: BPF prog-id=183 op=LOAD Dec 12 17:27:18.009000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe3851508 a2=94 a3=83 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.009000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.010000 audit: BPF prog-id=184 op=LOAD Dec 12 17:27:18.010000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe38512c8 a2=94 a3=2 items=0 ppid=4015 pid=4114 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.010000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.010000 audit: BPF prog-id=184 op=UNLOAD Dec 12 17:27:18.010000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.010000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.010000 audit: BPF prog-id=183 op=UNLOAD Dec 12 17:27:18.010000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3203e620 a3=32031b00 items=0 ppid=4015 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.010000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:27:18.013000 audit: BPF prog-id=185 op=LOAD Dec 12 17:27:18.014000 audit: BPF prog-id=186 op=LOAD Dec 12 17:27:18.014000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4123 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363031653237653535316234613165643330326633306437386238 Dec 12 17:27:18.014000 audit: BPF prog-id=186 op=UNLOAD Dec 12 17:27:18.014000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4123 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363031653237653535316234613165643330326633306437386238 Dec 12 17:27:18.014000 audit: BPF prog-id=187 op=LOAD Dec 12 17:27:18.014000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4123 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363031653237653535316234613165643330326633306437386238 Dec 12 17:27:18.014000 audit: BPF prog-id=188 op=LOAD Dec 12 17:27:18.014000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4123 pid=4134 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363031653237653535316234613165643330326633306437386238 Dec 12 17:27:18.014000 audit: BPF prog-id=188 op=UNLOAD Dec 12 17:27:18.014000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4123 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363031653237653535316234613165643330326633306437386238 Dec 12 17:27:18.014000 audit: BPF prog-id=187 op=UNLOAD Dec 12 17:27:18.014000 audit[4134]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4123 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363031653237653535316234613165643330326633306437386238 Dec 12 17:27:18.014000 audit: BPF prog-id=189 op=LOAD Dec 12 17:27:18.014000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4123 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363031653237653535316234613165643330326633306437386238 Dec 12 17:27:18.016647 systemd-resolved[1271]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:27:18.021000 audit: BPF prog-id=190 op=LOAD Dec 12 17:27:18.021000 audit[4156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffecf9c8b8 a2=98 a3=ffffecf9c8a8 items=0 ppid=4015 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:18.021000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:27:18.021000 audit[4156]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffecf9c888 a3=0 items=0 
ppid=4015 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:18.021000 audit: BPF prog-id=191 op=LOAD Dec 12 17:27:18.021000 audit[4156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffecf9c768 a2=74 a3=95 items=0 ppid=4015 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:18.021000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:27:18.021000 audit[4156]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4015 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:18.021000 audit: BPF prog-id=192 op=LOAD Dec 12 17:27:18.021000 audit[4156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffecf9c798 a2=40 a3=ffffecf9c7c8 items=0 ppid=4015 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:18.021000 audit: BPF prog-id=192 op=UNLOAD Dec 12 17:27:18.021000 audit[4156]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffecf9c7c8 items=0 ppid=4015 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:27:18.072456 containerd[1608]: time="2025-12-12T17:27:18.072333589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5649bfbdf7-hfvhj,Uid:6cfb895d-8011-4a5d-b5ce-fee54d2880b7,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"03601e27e551b4a1ed302f30d78b849500c3cf25ecf63374600408c4bb5bfdd4\"" Dec 12 17:27:18.081082 containerd[1608]: time="2025-12-12T17:27:18.081039060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:27:18.092166 systemd-networkd[1512]: vxlan.calico: Link UP Dec 12 17:27:18.092172 systemd-networkd[1512]: vxlan.calico: Gained carrier Dec 12 17:27:18.108000 audit: BPF prog-id=193 op=LOAD Dec 12 17:27:18.108000 audit[4186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec7a6118 a2=98 a3=ffffec7a6108 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.108000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.108000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:27:18.108000 audit[4186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffec7a60e8 a3=0 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.108000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.108000 audit: BPF prog-id=194 op=LOAD Dec 12 17:27:18.108000 audit[4186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec7a5df8 a2=74 a3=95 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.108000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.110000 audit: BPF prog-id=194 op=UNLOAD Dec 12 17:27:18.110000 audit[4186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.110000 audit: BPF prog-id=195 op=LOAD Dec 12 17:27:18.110000 audit[4186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec7a5e58 a2=94 a3=2 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.110000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.110000 audit: BPF prog-id=195 op=UNLOAD Dec 12 17:27:18.110000 audit[4186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.111000 audit: BPF prog-id=196 op=LOAD Dec 12 17:27:18.111000 audit[4186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec7a5cd8 a2=40 a3=ffffec7a5d08 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.111000 audit: BPF prog-id=196 op=UNLOAD Dec 12 17:27:18.111000 audit[4186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffec7a5d08 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.111000 audit: BPF prog-id=197 op=LOAD Dec 12 17:27:18.111000 audit[4186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec7a5e28 a2=94 a3=b7 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.111000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:27:18.111000 audit[4186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.111000 audit: BPF prog-id=198 op=LOAD Dec 12 17:27:18.111000 audit[4186]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=6 a0=5 a1=ffffec7a54d8 a2=94 a3=2 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.111000 audit: BPF prog-id=198 op=UNLOAD Dec 12 17:27:18.111000 audit[4186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.111000 audit: BPF prog-id=199 op=LOAD Dec 12 17:27:18.111000 audit[4186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec7a5668 a2=94 a3=30 items=0 ppid=4015 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:27:18.114000 audit: BPF prog-id=200 op=LOAD Dec 12 17:27:18.114000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcfdb1b08 a2=98 a3=ffffcfdb1af8 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.114000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.114000 audit: BPF prog-id=200 op=UNLOAD Dec 12 17:27:18.114000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcfdb1ad8 a3=0 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.114000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.114000 audit: BPF prog-id=201 op=LOAD Dec 12 17:27:18.114000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcfdb1798 a2=74 a3=95 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.114000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.114000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:27:18.114000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.114000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.114000 audit: BPF prog-id=202 op=LOAD Dec 12 17:27:18.114000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcfdb17f8 a2=94 a3=2 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.114000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.114000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:27:18.114000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.114000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.199997 kubelet[2781]: I1212 17:27:18.199898 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:27:18.200970 kubelet[2781]: E1212 17:27:18.200889 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:18.218000 audit: BPF prog-id=203 op=LOAD Dec 12 17:27:18.218000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcfdb17b8 a2=40 a3=ffffcfdb17e8 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.218000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.218000 audit: BPF prog-id=203 op=UNLOAD Dec 12 17:27:18.218000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffcfdb17e8 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.218000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.228000 audit: BPF prog-id=204 op=LOAD Dec 12 17:27:18.228000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcfdb17c8 a2=94 a3=4 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.228000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:27:18.228000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.228000 audit: BPF prog-id=205 op=LOAD Dec 12 17:27:18.228000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcfdb1608 a2=94 a3=5 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.228000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:27:18.228000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.228000 audit: BPF prog-id=206 op=LOAD Dec 12 17:27:18.228000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcfdb1838 a2=94 a3=6 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.228000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:27:18.228000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.228000 audit: BPF prog-id=207 op=LOAD Dec 12 17:27:18.228000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcfdb1008 a2=94 a3=83 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.229000 audit: BPF prog-id=208 op=LOAD Dec 12 17:27:18.229000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffcfdb0dc8 a2=94 a3=2 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.229000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:27:18.229000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.229000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:27:18.229000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=b466620 a3=b459b00 items=0 ppid=4015 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:27:18.246000 audit: BPF prog-id=199 op=UNLOAD Dec 12 17:27:18.246000 audit[4015]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000dde240 a2=0 a3=0 items=0 ppid=3967 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.246000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 17:27:18.291000 audit[4216]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4216 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:18.291000 audit[4216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffffbc5570 a2=0 a3=ffffade34fa8 
items=0 ppid=4015 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.291000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:18.291000 audit[4218]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:18.291000 audit[4218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffffdee9a40 a2=0 a3=ffff93c3cfa8 items=0 ppid=4015 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.291000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:18.294519 containerd[1608]: time="2025-12-12T17:27:18.294478783Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:18.296045 containerd[1608]: time="2025-12-12T17:27:18.295923748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:27:18.296045 containerd[1608]: time="2025-12-12T17:27:18.295991072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:18.296238 kubelet[2781]: E1212 17:27:18.296197 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:18.299227 kubelet[2781]: E1212 17:27:18.299170 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:18.298000 audit[4217]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4217 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:18.298000 audit[4217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff2938330 a2=0 a3=ffffa8ebffa8 items=0 ppid=4015 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.298000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:18.312970 kubelet[2781]: E1212 17:27:18.312903 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e116dbdb0e33466c944d333c658c0107,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djlw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5649bfbdf7-hfvhj_calico-system(6cfb895d-8011-4a5d-b5ce-fee54d2880b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:18.314962 containerd[1608]: time="2025-12-12T17:27:18.314926263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:27:18.302000 audit[4220]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4220 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:18.302000 audit[4220]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffdf6f1800 a2=0 a3=ffffae5dafa8 items=0 ppid=4015 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:18.302000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:18.516997 containerd[1608]: time="2025-12-12T17:27:18.516882872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:18.518032 containerd[1608]: time="2025-12-12T17:27:18.517952575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:27:18.518032 containerd[1608]: time="2025-12-12T17:27:18.517974656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:18.518328 kubelet[2781]: E1212 17:27:18.518265 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:18.518328 kubelet[2781]: E1212 17:27:18.518313 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:18.518849 kubelet[2781]: E1212 17:27:18.518769 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djlw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5649bfbdf7-hfvhj_calico-system(6cfb895d-8011-4a5d-b5ce-fee54d2880b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:18.520019 kubelet[2781]: E1212 17:27:18.519984 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5649bfbdf7-hfvhj" podUID="6cfb895d-8011-4a5d-b5ce-fee54d2880b7" Dec 12 17:27:19.032558 kubelet[2781]: I1212 17:27:19.032171 2781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40904382-6d66-45a7-b690-70451381869b" path="/var/lib/kubelet/pods/40904382-6d66-45a7-b690-70451381869b/volumes" Dec 12 17:27:19.065234 systemd-networkd[1512]: calid9cb7d89707: Gained IPv6LL Dec 12 17:27:19.200436 kubelet[2781]: E1212 17:27:19.200357 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:19.238173 kubelet[2781]: E1212 17:27:19.238112 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5649bfbdf7-hfvhj" podUID="6cfb895d-8011-4a5d-b5ce-fee54d2880b7" Dec 12 17:27:19.242000 audit[4299]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:19.242000 audit[4299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffda342ec0 a2=0 a3=1 items=0 ppid=2901 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:19.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:19.250000 audit[4299]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:19.250000 audit[4299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffda342ec0 a2=0 a3=1 items=0 ppid=2901 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:19.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:19.323889 systemd[1]: Started sshd@7-10.0.0.57:22-10.0.0.1:39424.service - OpenSSH per-connection server daemon (10.0.0.1:39424). Dec 12 17:27:19.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.57:22-10.0.0.1:39424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:19.404000 audit[4307]: USER_ACCT pid=4307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:19.405831 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 39424 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:19.405000 audit[4307]: CRED_ACQ pid=4307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:19.405000 audit[4307]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecb98ca0 a2=3 a3=0 items=0 ppid=1 pid=4307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:19.405000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:19.407912 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:19.412880 systemd-logind[1592]: New session 8 of user core. Dec 12 17:27:19.419425 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:27:19.420000 audit[4307]: USER_START pid=4307 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:19.421000 audit[4310]: CRED_ACQ pid=4310 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:19.575868 sshd[4310]: Connection closed by 10.0.0.1 port 39424 Dec 12 17:27:19.576169 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:19.577440 systemd-networkd[1512]: vxlan.calico: Gained IPv6LL Dec 12 17:27:19.576000 audit[4307]: USER_END pid=4307 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:19.576000 audit[4307]: CRED_DISP pid=4307 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:19.580930 systemd-logind[1592]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:27:19.581185 systemd[1]: sshd@7-10.0.0.57:22-10.0.0.1:39424.service: Deactivated successfully. Dec 12 17:27:19.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.57:22-10.0.0.1:39424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:19.583310 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:27:19.584854 systemd-logind[1592]: Removed session 8. 
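The PROCTITLE field in the audit records above is the audited command line, hex-encoded with NUL bytes between the arguments. A minimal Python sketch (an added illustration, not part of the captured journal) that decodes the bpftool value recorded above:

# An audit PROCTITLE value is the process argv, hex-encoded, with NUL bytes
# separating the individual arguments. Sample copied from the bpftool audit
# records earlier in this journal.
proctitle_hex = (
    "627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F"
    "66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F"
    "63616C69636F5F746D705F41007479706500786470"
)
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp

Applied to the NETFILTER_CFG entries, the same decoding gives iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000, invoked under the calico-node felix process recorded above as ppid 4015.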
Dec 12 17:27:24.031885 containerd[1608]: time="2025-12-12T17:27:24.031163847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nnrnj,Uid:15a11d37-fdc8-4b22-a7c5-9c4c4246dd24,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:24.031885 containerd[1608]: time="2025-12-12T17:27:24.031403819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffhww,Uid:5fefc895-2faa-4b6c-b800-5fdfceed3426,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:24.178266 systemd-networkd[1512]: caliaadc0c430dc: Link UP Dec 12 17:27:24.178939 systemd-networkd[1512]: caliaadc0c430dc: Gained carrier Dec 12 17:27:24.194636 containerd[1608]: 2025-12-12 17:27:24.112 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--nnrnj-eth0 goldmane-666569f655- calico-system 15a11d37-fdc8-4b22-a7c5-9c4c4246dd24 892 0 2025-12-12 17:27:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-nnrnj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliaadc0c430dc [] [] }} ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Namespace="calico-system" Pod="goldmane-666569f655-nnrnj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nnrnj-" Dec 12 17:27:24.194636 containerd[1608]: 2025-12-12 17:27:24.112 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Namespace="calico-system" Pod="goldmane-666569f655-nnrnj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nnrnj-eth0" Dec 12 17:27:24.194636 containerd[1608]: 2025-12-12 17:27:24.138 [INFO][4364] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" HandleID="k8s-pod-network.7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Workload="localhost-k8s-goldmane--666569f655--nnrnj-eth0" Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.138 [INFO][4364] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" HandleID="k8s-pod-network.7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Workload="localhost-k8s-goldmane--666569f655--nnrnj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005029b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-nnrnj", "timestamp":"2025-12-12 17:27:24.138404529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.138 [INFO][4364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.138 [INFO][4364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.139 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.150 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" host="localhost" Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.155 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.159 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.161 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.163 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:24.194845 containerd[1608]: 2025-12-12 17:27:24.163 [INFO][4364] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" host="localhost" Dec 12 17:27:24.195059 containerd[1608]: 2025-12-12 17:27:24.165 [INFO][4364] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25 Dec 12 17:27:24.195059 containerd[1608]: 2025-12-12 17:27:24.168 [INFO][4364] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" host="localhost" Dec 12 17:27:24.195059 containerd[1608]: 2025-12-12 17:27:24.173 [INFO][4364] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" host="localhost" Dec 12 17:27:24.195059 containerd[1608]: 2025-12-12 17:27:24.173 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" host="localhost" Dec 12 17:27:24.195059 containerd[1608]: 2025-12-12 17:27:24.174 [INFO][4364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
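The Calico IPAM records above trace the full assignment for the goldmane sandbox: affinity lookup for block 192.168.88.128/26, block load, then the claim of 192.168.88.130/26 under handle k8s-pod-network.7a3ddf0f…. A rough post-processing sketch (an added illustration; the regex assumes the journal line format shown above) that lists every claim of this kind:

import re
import sys

# Matches the Calico IPAM records above, e.g.
#   ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.<sandbox-id>" host="localhost"
CLAIM = re.compile(
    r'Successfully claimed IPs: \[([0-9./]+)\] '
    r'block=([0-9./]+) handle="k8s-pod-network\.([0-9a-f]+)"'
)

# Read journal text on stdin and print one line per successful IPAM claim:
# the assigned IP, the /26 block it came from, and the sandbox id.
for line in sys.stdin:
    m = CLAIM.search(line)
    if m:
        ip, block, sandbox = m.groups()
        print(f"{ip} (block {block}) -> sandbox {sandbox[:12]}")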
Dec 12 17:27:24.195059 containerd[1608]: 2025-12-12 17:27:24.174 [INFO][4364] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" HandleID="k8s-pod-network.7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Workload="localhost-k8s-goldmane--666569f655--nnrnj-eth0" Dec 12 17:27:24.195572 containerd[1608]: 2025-12-12 17:27:24.176 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Namespace="calico-system" Pod="goldmane-666569f655-nnrnj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nnrnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nnrnj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"15a11d37-fdc8-4b22-a7c5-9c4c4246dd24", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-nnrnj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaadc0c430dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:24.195572 containerd[1608]: 2025-12-12 17:27:24.176 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Namespace="calico-system" Pod="goldmane-666569f655-nnrnj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nnrnj-eth0" Dec 12 17:27:24.195650 containerd[1608]: 2025-12-12 17:27:24.176 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaadc0c430dc ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Namespace="calico-system" Pod="goldmane-666569f655-nnrnj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nnrnj-eth0" Dec 12 17:27:24.195650 containerd[1608]: 2025-12-12 17:27:24.179 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Namespace="calico-system" Pod="goldmane-666569f655-nnrnj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nnrnj-eth0" Dec 12 17:27:24.195692 containerd[1608]: 2025-12-12 17:27:24.179 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Namespace="calico-system" Pod="goldmane-666569f655-nnrnj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nnrnj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nnrnj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"15a11d37-fdc8-4b22-a7c5-9c4c4246dd24", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25", Pod:"goldmane-666569f655-nnrnj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaadc0c430dc", MAC:"92:67:de:9c:13:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:24.195738 containerd[1608]: 2025-12-12 17:27:24.192 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" Namespace="calico-system" Pod="goldmane-666569f655-nnrnj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nnrnj-eth0" Dec 12 17:27:24.206000 audit[4393]: NETFILTER_CFG table=filter:129 family=2 entries=44 op=nft_register_chain pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:24.209148 kernel: kauditd_printk_skb: 242 callbacks suppressed Dec 12 17:27:24.209235 kernel: audit: type=1325 audit(1765560444.206:655): table=filter:129 family=2 entries=44 op=nft_register_chain pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:24.206000 audit[4393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffe8ffc2b0 a2=0 a3=ffffb056bfa8 items=0 ppid=4015 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.215361 kernel: audit: type=1300 audit(1765560444.206:655): arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffe8ffc2b0 a2=0 a3=ffffb056bfa8 items=0 ppid=4015 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.215481 kernel: audit: type=1327 audit(1765560444.206:655): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:24.206000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:24.224807 containerd[1608]: time="2025-12-12T17:27:24.224743310Z" level=info 
msg="connecting to shim 7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25" address="unix:///run/containerd/s/9179aaa802a268f0e92188c36b355bf2dd31fc9f8da6113e1b4422c8782a1d9c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:24.249317 systemd[1]: Started cri-containerd-7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25.scope - libcontainer container 7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25. Dec 12 17:27:24.259000 audit: BPF prog-id=209 op=LOAD Dec 12 17:27:24.261000 audit: BPF prog-id=210 op=LOAD Dec 12 17:27:24.263418 kernel: audit: type=1334 audit(1765560444.259:656): prog-id=209 op=LOAD Dec 12 17:27:24.263450 kernel: audit: type=1334 audit(1765560444.261:657): prog-id=210 op=LOAD Dec 12 17:27:24.261000 audit[4412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4401 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.267147 kernel: audit: type=1300 audit(1765560444.261:657): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4401 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761336464663066626132346438626137616333363562336339313665 Dec 12 17:27:24.271456 kernel: audit: type=1327 audit(1765560444.261:657): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761336464663066626132346438626137616333363562336339313665 Dec 12 17:27:24.271573 kernel: audit: type=1334 audit(1765560444.262:658): prog-id=210 op=UNLOAD Dec 12 17:27:24.262000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:27:24.273255 kernel: audit: type=1300 audit(1765560444.262:658): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.262000 audit[4412]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.274001 systemd-resolved[1271]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:27:24.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761336464663066626132346438626137616333363562336339313665 Dec 12 17:27:24.279693 kernel: audit: type=1327 audit(1765560444.262:658): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761336464663066626132346438626137616333363562336339313665 Dec 12 17:27:24.262000 audit: BPF prog-id=211 op=LOAD Dec 12 17:27:24.262000 audit[4412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4401 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761336464663066626132346438626137616333363562336339313665 Dec 12 17:27:24.266000 audit: BPF prog-id=212 op=LOAD Dec 12 17:27:24.266000 audit[4412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4401 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761336464663066626132346438626137616333363562336339313665 Dec 12 17:27:24.266000 audit: BPF prog-id=212 op=UNLOAD Dec 12 17:27:24.266000 audit[4412]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761336464663066626132346438626137616333363562336339313665 Dec 12 17:27:24.266000 audit: BPF prog-id=211 op=UNLOAD Dec 12 17:27:24.266000 audit[4412]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761336464663066626132346438626137616333363562336339313665 Dec 12 17:27:24.266000 audit: BPF prog-id=213 op=LOAD Dec 12 17:27:24.266000 audit[4412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4401 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.266000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761336464663066626132346438626137616333363562336339313665 Dec 12 17:27:24.294889 systemd-networkd[1512]: calia908e5c703a: Link UP Dec 12 17:27:24.298102 systemd-networkd[1512]: calia908e5c703a: Gained carrier Dec 12 17:27:24.311518 containerd[1608]: 2025-12-12 17:27:24.112 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ffhww-eth0 csi-node-driver- calico-system 5fefc895-2faa-4b6c-b800-5fdfceed3426 788 0 2025-12-12 17:27:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ffhww eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia908e5c703a [] [] }} ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Namespace="calico-system" Pod="csi-node-driver-ffhww" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffhww-" Dec 12 17:27:24.311518 containerd[1608]: 2025-12-12 17:27:24.112 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Namespace="calico-system" Pod="csi-node-driver-ffhww" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffhww-eth0" Dec 12 17:27:24.311518 containerd[1608]: 2025-12-12 17:27:24.145 [INFO][4370] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" HandleID="k8s-pod-network.d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Workload="localhost-k8s-csi--node--driver--ffhww-eth0" Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.145 [INFO][4370] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" HandleID="k8s-pod-network.d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Workload="localhost-k8s-csi--node--driver--ffhww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400051dee0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ffhww", "timestamp":"2025-12-12 17:27:24.145537278 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.145 [INFO][4370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.174 [INFO][4370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.174 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.251 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" host="localhost" Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.257 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.262 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.264 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.272 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:24.311697 containerd[1608]: 2025-12-12 17:27:24.273 [INFO][4370] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" host="localhost" Dec 12 17:27:24.312487 containerd[1608]: 2025-12-12 17:27:24.275 [INFO][4370] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4 Dec 12 17:27:24.312487 containerd[1608]: 2025-12-12 17:27:24.280 [INFO][4370] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" host="localhost" Dec 12 17:27:24.312487 containerd[1608]: 2025-12-12 17:27:24.290 [INFO][4370] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" host="localhost" Dec 12 17:27:24.312487 containerd[1608]: 2025-12-12 17:27:24.291 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" host="localhost" Dec 12 17:27:24.312487 containerd[1608]: 2025-12-12 17:27:24.291 [INFO][4370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
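Alongside the sandbox setup, kubelet keeps reporting ErrImagePull / ImagePullBackOff because ghcr.io answers 404 Not Found for the calico/whisker and calico/whisker-backend tags above (and, further down, for calico/goldmane). A small sketch, again an added illustration over the journal text rather than part of it, that tallies which image references are failing:

import re
import sys
from collections import Counter

# Matches kubelet records like the ones above:
#   kuberuntime_image.go:42] "Failed to pull image" err="..." image="ghcr.io/flatcar/calico/whisker:v3.30.4"
FAILED_PULL = re.compile(r'"Failed to pull image".*?image="([^"]+)"')

failures = Counter()
for line in sys.stdin:
    m = FAILED_PULL.search(line)
    if m:
        failures[m.group(1)] += 1

# Print the failing image references, most frequent first.
for image, count in failures.most_common():
    print(f"{count:3d}  {image}")

The resulting counts line up with the ImagePullBackOff status kubelet reports for the whisker pod earlier in this journal.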
Dec 12 17:27:24.312487 containerd[1608]: 2025-12-12 17:27:24.291 [INFO][4370] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" HandleID="k8s-pod-network.d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Workload="localhost-k8s-csi--node--driver--ffhww-eth0" Dec 12 17:27:24.312608 containerd[1608]: 2025-12-12 17:27:24.293 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Namespace="calico-system" Pod="csi-node-driver-ffhww" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffhww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ffhww-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fefc895-2faa-4b6c-b800-5fdfceed3426", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ffhww", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia908e5c703a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:24.312668 containerd[1608]: 2025-12-12 17:27:24.293 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Namespace="calico-system" Pod="csi-node-driver-ffhww" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffhww-eth0" Dec 12 17:27:24.312668 containerd[1608]: 2025-12-12 17:27:24.293 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia908e5c703a ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Namespace="calico-system" Pod="csi-node-driver-ffhww" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffhww-eth0" Dec 12 17:27:24.312668 containerd[1608]: 2025-12-12 17:27:24.297 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Namespace="calico-system" Pod="csi-node-driver-ffhww" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffhww-eth0" Dec 12 17:27:24.312725 containerd[1608]: 2025-12-12 17:27:24.297 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Namespace="calico-system" Pod="csi-node-driver-ffhww" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ffhww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ffhww-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fefc895-2faa-4b6c-b800-5fdfceed3426", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4", Pod:"csi-node-driver-ffhww", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia908e5c703a", MAC:"4a:0a:21:33:95:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:24.312772 containerd[1608]: 2025-12-12 17:27:24.308 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" Namespace="calico-system" Pod="csi-node-driver-ffhww" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffhww-eth0" Dec 12 17:27:24.321000 audit[4450]: NETFILTER_CFG table=filter:130 family=2 entries=46 op=nft_register_chain pid=4450 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:24.321000 audit[4450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23616 a0=3 a1=fffffd362340 a2=0 a3=ffff885fafa8 items=0 ppid=4015 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.321000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:24.332230 containerd[1608]: time="2025-12-12T17:27:24.332193723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nnrnj,Uid:15a11d37-fdc8-4b22-a7c5-9c4c4246dd24,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a3ddf0fba24d8ba7ac365b3c916e6cf3529322d6d6cf673e7c7ad23633f1a25\"" Dec 12 17:27:24.335655 containerd[1608]: time="2025-12-12T17:27:24.335537286Z" level=info msg="connecting to shim d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4" address="unix:///run/containerd/s/0c8119d3c3cb15dbbe271b9644b0cb251c3bd32d886afcd966c04cfac763c4d3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:24.342434 containerd[1608]: time="2025-12-12T17:27:24.342248734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:27:24.361334 systemd[1]: Started 
cri-containerd-d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4.scope - libcontainer container d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4. Dec 12 17:27:24.369000 audit: BPF prog-id=214 op=LOAD Dec 12 17:27:24.370000 audit: BPF prog-id=215 op=LOAD Dec 12 17:27:24.370000 audit[4471]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4459 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393261303731373961616539636565353935333238353238353261 Dec 12 17:27:24.370000 audit: BPF prog-id=215 op=UNLOAD Dec 12 17:27:24.370000 audit[4471]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4459 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393261303731373961616539636565353935333238353238353261 Dec 12 17:27:24.370000 audit: BPF prog-id=216 op=LOAD Dec 12 17:27:24.370000 audit[4471]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4459 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393261303731373961616539636565353935333238353238353261 Dec 12 17:27:24.370000 audit: BPF prog-id=217 op=LOAD Dec 12 17:27:24.370000 audit[4471]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4459 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393261303731373961616539636565353935333238353238353261 Dec 12 17:27:24.370000 audit: BPF prog-id=217 op=UNLOAD Dec 12 17:27:24.370000 audit[4471]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4459 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.370000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393261303731373961616539636565353935333238353238353261 Dec 12 17:27:24.370000 audit: BPF prog-id=216 op=UNLOAD Dec 12 17:27:24.370000 audit[4471]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4459 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393261303731373961616539636565353935333238353238353261 Dec 12 17:27:24.370000 audit: BPF prog-id=218 op=LOAD Dec 12 17:27:24.370000 audit[4471]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4459 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393261303731373961616539636565353935333238353238353261 Dec 12 17:27:24.372651 systemd-resolved[1271]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:27:24.386180 containerd[1608]: time="2025-12-12T17:27:24.386145200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffhww,Uid:5fefc895-2faa-4b6c-b800-5fdfceed3426,Namespace:calico-system,Attempt:0,} returns sandbox id \"d092a07179aae9cee59532852852a27d158f75de378f0ed374734099e36e65e4\"" Dec 12 17:27:24.556964 containerd[1608]: time="2025-12-12T17:27:24.556281917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:24.558187 containerd[1608]: time="2025-12-12T17:27:24.558103446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:27:24.558411 kubelet[2781]: E1212 17:27:24.558364 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:24.558711 kubelet[2781]: E1212 17:27:24.558413 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:24.558745 containerd[1608]: time="2025-12-12T17:27:24.558149248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:24.558792 kubelet[2781]: E1212 
17:27:24.558691 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwpkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nnrnj_calico-system(15a11d37-fdc8-4b22-a7c5-9c4c4246dd24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:24.558897 containerd[1608]: time="2025-12-12T17:27:24.558773239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:27:24.559985 kubelet[2781]: E1212 17:27:24.559901 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-nnrnj" podUID="15a11d37-fdc8-4b22-a7c5-9c4c4246dd24" Dec 12 17:27:24.588567 systemd[1]: Started sshd@8-10.0.0.57:22-10.0.0.1:50378.service - OpenSSH per-connection server daemon (10.0.0.1:50378). Dec 12 17:27:24.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.57:22-10.0.0.1:50378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:24.636000 audit[4497]: USER_ACCT pid=4497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:24.637703 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 50378 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:24.637000 audit[4497]: CRED_ACQ pid=4497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:24.637000 audit[4497]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4430050 a2=3 a3=0 items=0 ppid=1 pid=4497 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:24.637000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:24.638963 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:24.643303 systemd-logind[1592]: New session 9 of user core. Dec 12 17:27:24.654317 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 12 17:27:24.654000 audit[4497]: USER_START pid=4497 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:24.656000 audit[4500]: CRED_ACQ pid=4500 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:24.730376 containerd[1608]: time="2025-12-12T17:27:24.730289183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:24.732616 containerd[1608]: time="2025-12-12T17:27:24.732565854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:27:24.732616 containerd[1608]: time="2025-12-12T17:27:24.732570294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:24.732870 kubelet[2781]: E1212 17:27:24.732809 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:24.732917 kubelet[2781]: E1212 17:27:24.732870 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:24.733046 kubelet[2781]: E1212 17:27:24.733008 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgzfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffhww_calico-system(5fefc895-2faa-4b6c-b800-5fdfceed3426): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:24.735014 containerd[1608]: time="2025-12-12T17:27:24.734977292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:27:24.778853 sshd[4500]: Connection closed by 10.0.0.1 port 50378 Dec 12 17:27:24.779327 sshd-session[4497]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:24.779000 audit[4497]: USER_END pid=4497 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:24.779000 audit[4497]: CRED_DISP pid=4497 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:24.783320 systemd[1]: sshd@8-10.0.0.57:22-10.0.0.1:50378.service: Deactivated successfully. Dec 12 17:27:24.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.57:22-10.0.0.1:50378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:24.785510 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:27:24.786378 systemd-logind[1592]: Session 9 logged out. Waiting for processes to exit. 
Dec 12 17:27:24.787591 systemd-logind[1592]: Removed session 9. Dec 12 17:27:24.951586 containerd[1608]: time="2025-12-12T17:27:24.951528398Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:24.952619 containerd[1608]: time="2025-12-12T17:27:24.952565809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:27:24.952723 containerd[1608]: time="2025-12-12T17:27:24.952665934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:24.952950 kubelet[2781]: E1212 17:27:24.952912 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:24.953054 kubelet[2781]: E1212 17:27:24.953037 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:24.953289 kubelet[2781]: E1212 17:27:24.953249 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgzfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffhww_calico-system(5fefc895-2faa-4b6c-b800-5fdfceed3426): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:24.954685 kubelet[2781]: E1212 17:27:24.954631 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:25.034348 containerd[1608]: time="2025-12-12T17:27:25.032530157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795ddb8d7d-wsdsj,Uid:72a9762c-3e28-4065-81d7-b33bc02428ff,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:25.035265 kubelet[2781]: E1212 17:27:25.034924 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:25.036108 containerd[1608]: time="2025-12-12T17:27:25.036073606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9kkcr,Uid:66cfa93c-689b-443a-b94e-5457d1ad65b4,Namespace:kube-system,Attempt:0,}" Dec 
12 17:27:25.172383 systemd-networkd[1512]: califae80406014: Link UP Dec 12 17:27:25.172529 systemd-networkd[1512]: califae80406014: Gained carrier Dec 12 17:27:25.193683 containerd[1608]: 2025-12-12 17:27:25.095 [INFO][4520] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0 coredns-674b8bbfcf- kube-system 66cfa93c-689b-443a-b94e-5457d1ad65b4 882 0 2025-12-12 17:26:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-9kkcr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califae80406014 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Namespace="kube-system" Pod="coredns-674b8bbfcf-9kkcr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9kkcr-" Dec 12 17:27:25.193683 containerd[1608]: 2025-12-12 17:27:25.095 [INFO][4520] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Namespace="kube-system" Pod="coredns-674b8bbfcf-9kkcr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" Dec 12 17:27:25.193683 containerd[1608]: 2025-12-12 17:27:25.128 [INFO][4542] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" HandleID="k8s-pod-network.7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Workload="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.128 [INFO][4542] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" HandleID="k8s-pod-network.7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Workload="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000504240), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-9kkcr", "timestamp":"2025-12-12 17:27:25.128538126 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.128 [INFO][4542] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.128 [INFO][4542] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.128 [INFO][4542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.139 [INFO][4542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" host="localhost" Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.145 [INFO][4542] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.150 [INFO][4542] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.153 [INFO][4542] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.155 [INFO][4542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:25.193925 containerd[1608]: 2025-12-12 17:27:25.155 [INFO][4542] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" host="localhost" Dec 12 17:27:25.194196 containerd[1608]: 2025-12-12 17:27:25.157 [INFO][4542] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884 Dec 12 17:27:25.194196 containerd[1608]: 2025-12-12 17:27:25.162 [INFO][4542] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" host="localhost" Dec 12 17:27:25.194196 containerd[1608]: 2025-12-12 17:27:25.168 [INFO][4542] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" host="localhost" Dec 12 17:27:25.194196 containerd[1608]: 2025-12-12 17:27:25.168 [INFO][4542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" host="localhost" Dec 12 17:27:25.194196 containerd[1608]: 2025-12-12 17:27:25.168 [INFO][4542] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:25.194196 containerd[1608]: 2025-12-12 17:27:25.168 [INFO][4542] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" HandleID="k8s-pod-network.7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Workload="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" Dec 12 17:27:25.194315 containerd[1608]: 2025-12-12 17:27:25.170 [INFO][4520] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Namespace="kube-system" Pod="coredns-674b8bbfcf-9kkcr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"66cfa93c-689b-443a-b94e-5457d1ad65b4", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-9kkcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califae80406014", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:25.194368 containerd[1608]: 2025-12-12 17:27:25.170 [INFO][4520] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Namespace="kube-system" Pod="coredns-674b8bbfcf-9kkcr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" Dec 12 17:27:25.194368 containerd[1608]: 2025-12-12 17:27:25.170 [INFO][4520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califae80406014 ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Namespace="kube-system" Pod="coredns-674b8bbfcf-9kkcr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" Dec 12 17:27:25.194368 containerd[1608]: 2025-12-12 17:27:25.172 [INFO][4520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Namespace="kube-system" Pod="coredns-674b8bbfcf-9kkcr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" Dec 12 17:27:25.194429 
containerd[1608]: 2025-12-12 17:27:25.172 [INFO][4520] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Namespace="kube-system" Pod="coredns-674b8bbfcf-9kkcr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"66cfa93c-689b-443a-b94e-5457d1ad65b4", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884", Pod:"coredns-674b8bbfcf-9kkcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califae80406014", MAC:"3a:9a:ba:27:77:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:25.194429 containerd[1608]: 2025-12-12 17:27:25.191 [INFO][4520] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" Namespace="kube-system" Pod="coredns-674b8bbfcf-9kkcr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9kkcr-eth0" Dec 12 17:27:25.203000 audit[4569]: NETFILTER_CFG table=filter:131 family=2 entries=46 op=nft_register_chain pid=4569 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:25.203000 audit[4569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23724 a0=3 a1=ffffe3b7b780 a2=0 a3=ffff9a2cefa8 items=0 ppid=4015 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.203000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:25.220013 containerd[1608]: time="2025-12-12T17:27:25.219914955Z" level=info msg="connecting to shim 7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884" address="unix:///run/containerd/s/074eb93eca7050994dae54c9a620e4ca9e4bd4092303f55ea1bdb5da9814c63b" namespace=k8s.io 
protocol=ttrpc version=3 Dec 12 17:27:25.223665 kubelet[2781]: E1212 17:27:25.223578 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:25.230897 kubelet[2781]: E1212 17:27:25.230801 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nnrnj" podUID="15a11d37-fdc8-4b22-a7c5-9c4c4246dd24" Dec 12 17:27:25.256455 systemd[1]: Started cri-containerd-7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884.scope - libcontainer container 7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884. Dec 12 17:27:25.270000 audit: BPF prog-id=219 op=LOAD Dec 12 17:27:25.270000 audit: BPF prog-id=220 op=LOAD Dec 12 17:27:25.270000 audit[4591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4578 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353361306233636434343537323636353838386638396232393531 Dec 12 17:27:25.270000 audit: BPF prog-id=220 op=UNLOAD Dec 12 17:27:25.270000 audit[4591]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4578 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353361306233636434343537323636353838386638396232393531 Dec 12 17:27:25.270000 audit: BPF prog-id=221 op=LOAD Dec 12 17:27:25.270000 audit[4591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4578 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 12 17:27:25.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353361306233636434343537323636353838386638396232393531 Dec 12 17:27:25.270000 audit[4610]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4610 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:25.270000 audit[4610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdf3d9620 a2=0 a3=1 items=0 ppid=2901 pid=4610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:25.272000 audit: BPF prog-id=222 op=LOAD Dec 12 17:27:25.272000 audit[4591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4578 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353361306233636434343537323636353838386638396232393531 Dec 12 17:27:25.272000 audit: BPF prog-id=222 op=UNLOAD Dec 12 17:27:25.272000 audit[4591]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4578 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353361306233636434343537323636353838386638396232393531 Dec 12 17:27:25.272000 audit: BPF prog-id=221 op=UNLOAD Dec 12 17:27:25.272000 audit[4591]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4578 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353361306233636434343537323636353838386638396232393531 Dec 12 17:27:25.272000 audit: BPF prog-id=223 op=LOAD Dec 12 17:27:25.272000 audit[4591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4578 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.272000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353361306233636434343537323636353838386638396232393531 Dec 12 17:27:25.274934 systemd-resolved[1271]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:27:25.278000 audit[4610]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4610 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:25.278000 audit[4610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdf3d9620 a2=0 a3=1 items=0 ppid=2901 pid=4610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:25.293693 systemd-networkd[1512]: calia117bd027d9: Link UP Dec 12 17:27:25.293839 systemd-networkd[1512]: calia117bd027d9: Gained carrier Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.096 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0 calico-kube-controllers-795ddb8d7d- calico-system 72a9762c-3e28-4065-81d7-b33bc02428ff 889 0 2025-12-12 17:27:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:795ddb8d7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-795ddb8d7d-wsdsj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia117bd027d9 [] [] }} ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Namespace="calico-system" Pod="calico-kube-controllers-795ddb8d7d-wsdsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.096 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Namespace="calico-system" Pod="calico-kube-controllers-795ddb8d7d-wsdsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.135 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" HandleID="k8s-pod-network.f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Workload="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.135 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" HandleID="k8s-pod-network.f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Workload="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c4b0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"calico-kube-controllers-795ddb8d7d-wsdsj", "timestamp":"2025-12-12 17:27:25.135685546 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.135 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.168 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.168 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.243 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" host="localhost" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.258 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.265 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.268 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.271 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.272 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" host="localhost" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.273 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44 Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.278 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" host="localhost" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.286 [INFO][4544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" host="localhost" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.286 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" host="localhost" Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.286 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:25.311967 containerd[1608]: 2025-12-12 17:27:25.286 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" HandleID="k8s-pod-network.f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Workload="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" Dec 12 17:27:25.312535 containerd[1608]: 2025-12-12 17:27:25.289 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Namespace="calico-system" Pod="calico-kube-controllers-795ddb8d7d-wsdsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0", GenerateName:"calico-kube-controllers-795ddb8d7d-", Namespace:"calico-system", SelfLink:"", UID:"72a9762c-3e28-4065-81d7-b33bc02428ff", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795ddb8d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-795ddb8d7d-wsdsj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia117bd027d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:25.312535 containerd[1608]: 2025-12-12 17:27:25.289 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Namespace="calico-system" Pod="calico-kube-controllers-795ddb8d7d-wsdsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" Dec 12 17:27:25.312535 containerd[1608]: 2025-12-12 17:27:25.289 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia117bd027d9 ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Namespace="calico-system" Pod="calico-kube-controllers-795ddb8d7d-wsdsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" Dec 12 17:27:25.312535 containerd[1608]: 2025-12-12 17:27:25.296 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Namespace="calico-system" Pod="calico-kube-controllers-795ddb8d7d-wsdsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" Dec 12 17:27:25.312535 containerd[1608]: 2025-12-12 17:27:25.297 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Namespace="calico-system" Pod="calico-kube-controllers-795ddb8d7d-wsdsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0", GenerateName:"calico-kube-controllers-795ddb8d7d-", Namespace:"calico-system", SelfLink:"", UID:"72a9762c-3e28-4065-81d7-b33bc02428ff", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 27, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795ddb8d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44", Pod:"calico-kube-controllers-795ddb8d7d-wsdsj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia117bd027d9", MAC:"ae:62:b2:d0:ec:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:25.312535 containerd[1608]: 2025-12-12 17:27:25.309 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" Namespace="calico-system" Pod="calico-kube-controllers-795ddb8d7d-wsdsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795ddb8d7d--wsdsj-eth0" Dec 12 17:27:25.318990 containerd[1608]: time="2025-12-12T17:27:25.318954948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9kkcr,Uid:66cfa93c-689b-443a-b94e-5457d1ad65b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884\"" Dec 12 17:27:25.319830 kubelet[2781]: E1212 17:27:25.319801 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:25.323317 containerd[1608]: time="2025-12-12T17:27:25.323269433Z" level=info msg="CreateContainer within sandbox \"7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:27:25.327000 audit[4625]: NETFILTER_CFG table=filter:134 family=2 entries=40 op=nft_register_chain pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:25.327000 audit[4625]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20784 a0=3 a1=ffffe4598b60 a2=0 a3=ffff9e78afa8 items=0 ppid=4015 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.327000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:25.340177 containerd[1608]: time="2025-12-12T17:27:25.339656693Z" level=info msg="Container bfc428fee25efcc33a04ac5b67205da8fa985140381130448a0d3015c415a23d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:25.342578 containerd[1608]: time="2025-12-12T17:27:25.342542871Z" level=info msg="connecting to shim f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44" address="unix:///run/containerd/s/7c9bba253d34a60b5928fa1cdd5307cdc23ba7df1aaa195381b38f667aac682c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:25.347451 containerd[1608]: time="2025-12-12T17:27:25.347414663Z" level=info msg="CreateContainer within sandbox \"7a53a0b3cd44572665888f89b29511c40a5e13152c2899307b17bfb685724884\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfc428fee25efcc33a04ac5b67205da8fa985140381130448a0d3015c415a23d\"" Dec 12 17:27:25.348732 containerd[1608]: time="2025-12-12T17:27:25.348696724Z" level=info msg="StartContainer for \"bfc428fee25efcc33a04ac5b67205da8fa985140381130448a0d3015c415a23d\"" Dec 12 17:27:25.349752 containerd[1608]: time="2025-12-12T17:27:25.349701291Z" level=info msg="connecting to shim bfc428fee25efcc33a04ac5b67205da8fa985140381130448a0d3015c415a23d" address="unix:///run/containerd/s/074eb93eca7050994dae54c9a620e4ca9e4bd4092303f55ea1bdb5da9814c63b" protocol=ttrpc version=3 Dec 12 17:27:25.369327 systemd[1]: Started cri-containerd-bfc428fee25efcc33a04ac5b67205da8fa985140381130448a0d3015c415a23d.scope - libcontainer container bfc428fee25efcc33a04ac5b67205da8fa985140381130448a0d3015c415a23d. Dec 12 17:27:25.370462 systemd[1]: Started cri-containerd-f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44.scope - libcontainer container f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44. 
Dec 12 17:27:25.380000 audit: BPF prog-id=224 op=LOAD Dec 12 17:27:25.380000 audit: BPF prog-id=225 op=LOAD Dec 12 17:27:25.380000 audit[4646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4578 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633432386665653235656663633333613034616335623637323035 Dec 12 17:27:25.380000 audit: BPF prog-id=225 op=UNLOAD Dec 12 17:27:25.380000 audit[4646]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4578 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633432386665653235656663633333613034616335623637323035 Dec 12 17:27:25.380000 audit: BPF prog-id=226 op=LOAD Dec 12 17:27:25.380000 audit[4646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4578 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633432386665653235656663633333613034616335623637323035 Dec 12 17:27:25.380000 audit: BPF prog-id=227 op=LOAD Dec 12 17:27:25.380000 audit[4646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4578 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633432386665653235656663633333613034616335623637323035 Dec 12 17:27:25.380000 audit: BPF prog-id=227 op=UNLOAD Dec 12 17:27:25.380000 audit[4646]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4578 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633432386665653235656663633333613034616335623637323035 Dec 12 17:27:25.381000 audit: BPF prog-id=226 op=UNLOAD Dec 12 17:27:25.381000 audit[4646]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4578 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633432386665653235656663633333613034616335623637323035 Dec 12 17:27:25.381000 audit: BPF prog-id=228 op=LOAD Dec 12 17:27:25.381000 audit[4646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4578 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266633432386665653235656663633333613034616335623637323035 Dec 12 17:27:25.383000 audit: BPF prog-id=229 op=LOAD Dec 12 17:27:25.384000 audit: BPF prog-id=230 op=LOAD Dec 12 17:27:25.384000 audit[4645]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4635 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635656261396434663563363565303038333763653739393235656336 Dec 12 17:27:25.384000 audit: BPF prog-id=230 op=UNLOAD Dec 12 17:27:25.384000 audit[4645]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4635 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635656261396434663563363565303038333763653739393235656336 Dec 12 17:27:25.384000 audit: BPF prog-id=231 op=LOAD Dec 12 17:27:25.384000 audit[4645]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4635 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635656261396434663563363565303038333763653739393235656336 Dec 12 17:27:25.385000 audit: BPF prog-id=232 op=LOAD Dec 12 17:27:25.385000 audit[4645]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 
ppid=4635 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635656261396434663563363565303038333763653739393235656336 Dec 12 17:27:25.385000 audit: BPF prog-id=232 op=UNLOAD Dec 12 17:27:25.385000 audit[4645]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4635 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635656261396434663563363565303038333763653739393235656336 Dec 12 17:27:25.385000 audit: BPF prog-id=231 op=UNLOAD Dec 12 17:27:25.385000 audit[4645]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4635 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635656261396434663563363565303038333763653739393235656336 Dec 12 17:27:25.385000 audit: BPF prog-id=233 op=LOAD Dec 12 17:27:25.385000 audit[4645]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4635 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:25.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635656261396434663563363565303038333763653739393235656336 Dec 12 17:27:25.387688 systemd-resolved[1271]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:27:25.405307 containerd[1608]: time="2025-12-12T17:27:25.405150290Z" level=info msg="StartContainer for \"bfc428fee25efcc33a04ac5b67205da8fa985140381130448a0d3015c415a23d\" returns successfully" Dec 12 17:27:25.420659 containerd[1608]: time="2025-12-12T17:27:25.420605386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795ddb8d7d-wsdsj,Uid:72a9762c-3e28-4065-81d7-b33bc02428ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5eba9d4f5c65e00837ce79925ec62e322497094a65a3949a8108debce9b8e44\"" Dec 12 17:27:25.422551 containerd[1608]: time="2025-12-12T17:27:25.422525277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:27:25.605797 containerd[1608]: time="2025-12-12T17:27:25.605662633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
17:27:25.611795 containerd[1608]: time="2025-12-12T17:27:25.611739362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:27:25.612005 containerd[1608]: time="2025-12-12T17:27:25.611834486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:25.612391 kubelet[2781]: E1212 17:27:25.612058 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:25.612391 kubelet[2781]: E1212 17:27:25.612104 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:25.612391 kubelet[2781]: E1212 17:27:25.612259 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7htq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-795ddb8d7d-wsdsj_calico-system(72a9762c-3e28-4065-81d7-b33bc02428ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:25.613379 kubelet[2781]: E1212 17:27:25.613352 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" podUID="72a9762c-3e28-4065-81d7-b33bc02428ff" Dec 12 17:27:25.849438 systemd-networkd[1512]: caliaadc0c430dc: Gained IPv6LL Dec 12 17:27:25.976282 systemd-networkd[1512]: calia908e5c703a: Gained IPv6LL Dec 12 17:27:26.031314 kubelet[2781]: E1212 17:27:26.029552 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:26.031452 containerd[1608]: time="2025-12-12T17:27:26.030059595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpxb4,Uid:8244e43b-886c-44bc-ad6f-47d7edb4df86,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:26.162395 systemd-networkd[1512]: cali67fe3f32e72: Link UP Dec 12 17:27:26.162961 systemd-networkd[1512]: cali67fe3f32e72: Gained carrier Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.087 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0 coredns-674b8bbfcf- kube-system 8244e43b-886c-44bc-ad6f-47d7edb4df86 888 0 2025-12-12 17:26:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-cpxb4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67fe3f32e72 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpxb4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cpxb4-" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.087 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpxb4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.117 [INFO][4723] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" HandleID="k8s-pod-network.acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Workload="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.117 [INFO][4723] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" HandleID="k8s-pod-network.acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Workload="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-cpxb4", "timestamp":"2025-12-12 17:27:26.117488329 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.117 [INFO][4723] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.117 [INFO][4723] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.117 [INFO][4723] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.130 [INFO][4723] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" host="localhost" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.134 [INFO][4723] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.140 [INFO][4723] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.142 [INFO][4723] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.144 [INFO][4723] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.144 [INFO][4723] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" host="localhost" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.146 [INFO][4723] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15 Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.150 [INFO][4723] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" host="localhost" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.157 [INFO][4723] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" host="localhost" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.157 [INFO][4723] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" host="localhost" Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.157 [INFO][4723] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:27:26.177504 containerd[1608]: 2025-12-12 17:27:26.157 [INFO][4723] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" HandleID="k8s-pod-network.acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Workload="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" Dec 12 17:27:26.178585 containerd[1608]: 2025-12-12 17:27:26.160 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpxb4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8244e43b-886c-44bc-ad6f-47d7edb4df86", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-cpxb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67fe3f32e72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:26.178585 containerd[1608]: 2025-12-12 17:27:26.160 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpxb4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" Dec 12 17:27:26.178585 containerd[1608]: 2025-12-12 17:27:26.160 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67fe3f32e72 
ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpxb4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" Dec 12 17:27:26.178585 containerd[1608]: 2025-12-12 17:27:26.163 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpxb4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" Dec 12 17:27:26.178585 containerd[1608]: 2025-12-12 17:27:26.163 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpxb4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8244e43b-886c-44bc-ad6f-47d7edb4df86", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15", Pod:"coredns-674b8bbfcf-cpxb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67fe3f32e72", MAC:"16:ff:dd:27:13:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:26.178585 containerd[1608]: 2025-12-12 17:27:26.174 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpxb4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cpxb4-eth0" Dec 12 17:27:26.189000 audit[4741]: NETFILTER_CFG table=filter:135 family=2 entries=40 op=nft_register_chain pid=4741 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:26.189000 audit[4741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20328 a0=3 a1=ffffcdb4f9f0 a2=0 a3=ffff83d26fa8 items=0 ppid=4015 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.189000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:26.200707 containerd[1608]: time="2025-12-12T17:27:26.200649906Z" level=info msg="connecting to shim acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15" address="unix:///run/containerd/s/e72ecdb8e04b949b78e31b855924d2c264148730fe36dd2b2737c7a8580e5129" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:26.233259 kubelet[2781]: E1212 17:27:26.232842 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" podUID="72a9762c-3e28-4065-81d7-b33bc02428ff" Dec 12 17:27:26.233963 systemd[1]: Started cri-containerd-acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15.scope - libcontainer container acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15. Dec 12 17:27:26.236711 kubelet[2781]: E1212 17:27:26.236615 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:26.238149 kubelet[2781]: E1212 17:27:26.237860 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nnrnj" podUID="15a11d37-fdc8-4b22-a7c5-9c4c4246dd24" Dec 12 17:27:26.238610 kubelet[2781]: E1212 17:27:26.238558 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:26.280001 kubelet[2781]: I1212 17:27:26.279909 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9kkcr" podStartSLOduration=37.279891180999996 podStartE2EDuration="37.279891181s" podCreationTimestamp="2025-12-12 17:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:26.265531635 +0000 UTC m=+43.326157245" watchObservedRunningTime="2025-12-12 17:27:26.279891181 +0000 UTC m=+43.340516751" Dec 12 17:27:26.281000 audit: BPF prog-id=234 op=LOAD Dec 12 17:27:26.282000 audit: BPF prog-id=235 op=LOAD Dec 12 17:27:26.282000 audit[4762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4750 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163623861663833383239383561376634373865333265383461663135 Dec 12 17:27:26.283000 audit: BPF prog-id=235 op=UNLOAD Dec 12 17:27:26.283000 audit[4762]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163623861663833383239383561376634373865333265383461663135 Dec 12 17:27:26.283000 audit: BPF prog-id=236 op=LOAD Dec 12 17:27:26.283000 audit[4762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4750 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163623861663833383239383561376634373865333265383461663135 Dec 12 17:27:26.283000 audit: BPF prog-id=237 op=LOAD Dec 12 17:27:26.283000 audit[4762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4750 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163623861663833383239383561376634373865333265383461663135 Dec 12 17:27:26.283000 audit: BPF prog-id=237 op=UNLOAD Dec 12 17:27:26.283000 audit[4762]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.283000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163623861663833383239383561376634373865333265383461663135 Dec 12 17:27:26.283000 audit: BPF prog-id=236 op=UNLOAD Dec 12 17:27:26.283000 audit[4762]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163623861663833383239383561376634373865333265383461663135 Dec 12 17:27:26.283000 audit: BPF prog-id=238 op=LOAD Dec 12 17:27:26.283000 audit[4762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4750 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163623861663833383239383561376634373865333265383461663135 Dec 12 17:27:26.285342 systemd-resolved[1271]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:27:26.286000 audit[4782]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=4782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:26.286000 audit[4782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff674adb0 a2=0 a3=1 items=0 ppid=2901 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:26.293000 audit[4782]: NETFILTER_CFG table=nat:137 family=2 entries=35 op=nft_register_chain pid=4782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:26.293000 audit[4782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff674adb0 a2=0 a3=1 items=0 ppid=2901 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.293000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:26.311920 containerd[1608]: time="2025-12-12T17:27:26.311858504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpxb4,Uid:8244e43b-886c-44bc-ad6f-47d7edb4df86,Namespace:kube-system,Attempt:0,} returns sandbox id \"acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15\"" Dec 12 17:27:26.312703 kubelet[2781]: E1212 17:27:26.312680 2781 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:26.316447 containerd[1608]: time="2025-12-12T17:27:26.316399234Z" level=info msg="CreateContainer within sandbox \"acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:27:26.327152 containerd[1608]: time="2025-12-12T17:27:26.326668471Z" level=info msg="Container 1b779381268a70b8f7d5a3f56e9d8a872c0f58a21d648653ff01fbdd8c5891f1: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:26.327165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105432148.mount: Deactivated successfully. Dec 12 17:27:26.332416 containerd[1608]: time="2025-12-12T17:27:26.332342054Z" level=info msg="CreateContainer within sandbox \"acb8af8382985a7f478e32e84af158576a4087ec33c23e3929460f94808f9d15\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b779381268a70b8f7d5a3f56e9d8a872c0f58a21d648653ff01fbdd8c5891f1\"" Dec 12 17:27:26.334294 containerd[1608]: time="2025-12-12T17:27:26.333380182Z" level=info msg="StartContainer for \"1b779381268a70b8f7d5a3f56e9d8a872c0f58a21d648653ff01fbdd8c5891f1\"" Dec 12 17:27:26.334294 containerd[1608]: time="2025-12-12T17:27:26.334254702Z" level=info msg="connecting to shim 1b779381268a70b8f7d5a3f56e9d8a872c0f58a21d648653ff01fbdd8c5891f1" address="unix:///run/containerd/s/e72ecdb8e04b949b78e31b855924d2c264148730fe36dd2b2737c7a8580e5129" protocol=ttrpc version=3 Dec 12 17:27:26.357349 systemd[1]: Started cri-containerd-1b779381268a70b8f7d5a3f56e9d8a872c0f58a21d648653ff01fbdd8c5891f1.scope - libcontainer container 1b779381268a70b8f7d5a3f56e9d8a872c0f58a21d648653ff01fbdd8c5891f1. 
Dec 12 17:27:26.367000 audit: BPF prog-id=239 op=LOAD Dec 12 17:27:26.368000 audit: BPF prog-id=240 op=LOAD Dec 12 17:27:26.368000 audit[4789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4750 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162373739333831323638613730623866376435613366353665396438 Dec 12 17:27:26.368000 audit: BPF prog-id=240 op=UNLOAD Dec 12 17:27:26.368000 audit[4789]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162373739333831323638613730623866376435613366353665396438 Dec 12 17:27:26.368000 audit: BPF prog-id=241 op=LOAD Dec 12 17:27:26.368000 audit[4789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4750 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162373739333831323638613730623866376435613366353665396438 Dec 12 17:27:26.368000 audit: BPF prog-id=242 op=LOAD Dec 12 17:27:26.368000 audit[4789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4750 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162373739333831323638613730623866376435613366353665396438 Dec 12 17:27:26.368000 audit: BPF prog-id=242 op=UNLOAD Dec 12 17:27:26.368000 audit[4789]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162373739333831323638613730623866376435613366353665396438 Dec 12 17:27:26.368000 audit: BPF prog-id=241 op=UNLOAD Dec 12 17:27:26.368000 audit[4789]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162373739333831323638613730623866376435613366353665396438 Dec 12 17:27:26.368000 audit: BPF prog-id=243 op=LOAD Dec 12 17:27:26.368000 audit[4789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4750 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:26.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162373739333831323638613730623866376435613366353665396438 Dec 12 17:27:26.389202 containerd[1608]: time="2025-12-12T17:27:26.389162689Z" level=info msg="StartContainer for \"1b779381268a70b8f7d5a3f56e9d8a872c0f58a21d648653ff01fbdd8c5891f1\" returns successfully" Dec 12 17:27:27.031796 containerd[1608]: time="2025-12-12T17:27:27.031737974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88db5b6c-t62vd,Uid:1274eff9-19f2-4a62-a07e-e088770fa339,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:27:27.129712 systemd-networkd[1512]: califae80406014: Gained IPv6LL Dec 12 17:27:27.152990 systemd-networkd[1512]: cali2851882c5dc: Link UP Dec 12 17:27:27.155294 systemd-networkd[1512]: cali2851882c5dc: Gained carrier Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.072 [INFO][4825] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0 calico-apiserver-5f88db5b6c- calico-apiserver 1274eff9-19f2-4a62-a07e-e088770fa339 890 0 2025-12-12 17:26:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f88db5b6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f88db5b6c-t62vd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2851882c5dc [] [] }} ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-t62vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.072 [INFO][4825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-t62vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.099 [INFO][4839] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" 
HandleID="k8s-pod-network.0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Workload="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.099 [INFO][4839] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" HandleID="k8s-pod-network.0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Workload="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c35c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f88db5b6c-t62vd", "timestamp":"2025-12-12 17:27:27.099261949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.099 [INFO][4839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.099 [INFO][4839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.099 [INFO][4839] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.110 [INFO][4839] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" host="localhost" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.115 [INFO][4839] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.122 [INFO][4839] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.124 [INFO][4839] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.127 [INFO][4839] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.127 [INFO][4839] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" host="localhost" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.131 [INFO][4839] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9 Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.139 [INFO][4839] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" host="localhost" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.147 [INFO][4839] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" host="localhost" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.147 [INFO][4839] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] 
handle="k8s-pod-network.0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" host="localhost" Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.147 [INFO][4839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:27:27.171130 containerd[1608]: 2025-12-12 17:27:27.147 [INFO][4839] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" HandleID="k8s-pod-network.0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Workload="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" Dec 12 17:27:27.172015 containerd[1608]: 2025-12-12 17:27:27.150 [INFO][4825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-t62vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0", GenerateName:"calico-apiserver-5f88db5b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1274eff9-19f2-4a62-a07e-e088770fa339", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88db5b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f88db5b6c-t62vd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2851882c5dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:27.172015 containerd[1608]: 2025-12-12 17:27:27.150 [INFO][4825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-t62vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" Dec 12 17:27:27.172015 containerd[1608]: 2025-12-12 17:27:27.150 [INFO][4825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2851882c5dc ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-t62vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" Dec 12 17:27:27.172015 containerd[1608]: 2025-12-12 17:27:27.156 [INFO][4825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-t62vd" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" Dec 12 17:27:27.172015 containerd[1608]: 2025-12-12 17:27:27.157 [INFO][4825] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-t62vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0", GenerateName:"calico-apiserver-5f88db5b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1274eff9-19f2-4a62-a07e-e088770fa339", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88db5b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9", Pod:"calico-apiserver-5f88db5b6c-t62vd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2851882c5dc", MAC:"9e:a6:ba:38:14:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:27.172015 containerd[1608]: 2025-12-12 17:27:27.168 [INFO][4825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-t62vd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--t62vd-eth0" Dec 12 17:27:27.186000 audit[4856]: NETFILTER_CFG table=filter:138 family=2 entries=62 op=nft_register_chain pid=4856 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:27.186000 audit[4856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31756 a0=3 a1=ffffd8b01240 a2=0 a3=ffffbbc29fa8 items=0 ppid=4015 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.186000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:27.195096 systemd-networkd[1512]: calia117bd027d9: Gained IPv6LL Dec 12 17:27:27.197522 containerd[1608]: time="2025-12-12T17:27:27.197474552Z" level=info msg="connecting to shim 0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9" address="unix:///run/containerd/s/7582b87dda96d2f85d732c5a6af2f56c1905d288a4b317b45ede4ba008f25d26" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:27.222331 
systemd[1]: Started cri-containerd-0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9.scope - libcontainer container 0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9. Dec 12 17:27:27.233000 audit: BPF prog-id=244 op=LOAD Dec 12 17:27:27.233000 audit: BPF prog-id=245 op=LOAD Dec 12 17:27:27.233000 audit[4875]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4865 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656632303965353130383332646231316164363236393731613865 Dec 12 17:27:27.233000 audit: BPF prog-id=245 op=UNLOAD Dec 12 17:27:27.233000 audit[4875]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4865 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656632303965353130383332646231316164363236393731613865 Dec 12 17:27:27.233000 audit: BPF prog-id=246 op=LOAD Dec 12 17:27:27.233000 audit[4875]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4865 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656632303965353130383332646231316164363236393731613865 Dec 12 17:27:27.234000 audit: BPF prog-id=247 op=LOAD Dec 12 17:27:27.234000 audit[4875]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4865 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656632303965353130383332646231316164363236393731613865 Dec 12 17:27:27.234000 audit: BPF prog-id=247 op=UNLOAD Dec 12 17:27:27.234000 audit[4875]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4865 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.234000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656632303965353130383332646231316164363236393731613865 Dec 12 17:27:27.234000 audit: BPF prog-id=246 op=UNLOAD Dec 12 17:27:27.234000 audit[4875]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4865 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656632303965353130383332646231316164363236393731613865 Dec 12 17:27:27.234000 audit: BPF prog-id=248 op=LOAD Dec 12 17:27:27.234000 audit[4875]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4865 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656632303965353130383332646231316164363236393731613865 Dec 12 17:27:27.235020 systemd-resolved[1271]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:27:27.241997 kubelet[2781]: E1212 17:27:27.241771 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:27.242377 kubelet[2781]: E1212 17:27:27.242356 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:27.243428 kubelet[2781]: E1212 17:27:27.243379 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" podUID="72a9762c-3e28-4065-81d7-b33bc02428ff" Dec 12 17:27:27.260821 kubelet[2781]: I1212 17:27:27.260624 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cpxb4" podStartSLOduration=38.260607008 podStartE2EDuration="38.260607008s" podCreationTimestamp="2025-12-12 17:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:27.259718168 +0000 UTC m=+44.320343778" watchObservedRunningTime="2025-12-12 17:27:27.260607008 +0000 UTC m=+44.321232578" Dec 12 17:27:27.273820 containerd[1608]: time="2025-12-12T17:27:27.273780844Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5f88db5b6c-t62vd,Uid:1274eff9-19f2-4a62-a07e-e088770fa339,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0fef209e510832db11ad626971a8e4306c22513ca790f1bede458e4a676759d9\"" Dec 12 17:27:27.277216 containerd[1608]: time="2025-12-12T17:27:27.277151397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:27.289000 audit[4902]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:27.289000 audit[4902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff8cfcdd0 a2=0 a3=1 items=0 ppid=2901 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.289000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:27.297000 audit[4902]: NETFILTER_CFG table=nat:140 family=2 entries=44 op=nft_register_rule pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:27.297000 audit[4902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff8cfcdd0 a2=0 a3=1 items=0 ppid=2901 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.297000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:27.319000 audit[4904]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=4904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:27.319000 audit[4904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff1d645d0 a2=0 a3=1 items=0 ppid=2901 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:27.331000 audit[4904]: NETFILTER_CFG table=nat:142 family=2 entries=56 op=nft_register_chain pid=4904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:27.331000 audit[4904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff1d645d0 a2=0 a3=1 items=0 ppid=2901 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:27.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:27.513326 systemd-networkd[1512]: cali67fe3f32e72: Gained IPv6LL Dec 12 17:27:27.519912 containerd[1608]: time="2025-12-12T17:27:27.519852137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:27.520835 containerd[1608]: time="2025-12-12T17:27:27.520800900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:27.520942 containerd[1608]: time="2025-12-12T17:27:27.520885983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:27.521140 kubelet[2781]: E1212 17:27:27.521075 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:27.521195 kubelet[2781]: E1212 17:27:27.521141 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:27.521626 kubelet[2781]: E1212 17:27:27.521556 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shfcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88db5b6c-t62vd_calico-apiserver(1274eff9-19f2-4a62-a07e-e088770fa339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:27.522916 kubelet[2781]: E1212 
17:27:27.522867 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" podUID="1274eff9-19f2-4a62-a07e-e088770fa339" Dec 12 17:27:28.030035 containerd[1608]: time="2025-12-12T17:27:28.029979064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88db5b6c-zlv8l,Uid:79d51bdf-2c92-4c03-a442-924ffa919312,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:27:28.139953 systemd-networkd[1512]: cali46030ab47d8: Link UP Dec 12 17:27:28.141008 systemd-networkd[1512]: cali46030ab47d8: Gained carrier Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.071 [INFO][4912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0 calico-apiserver-5f88db5b6c- calico-apiserver 79d51bdf-2c92-4c03-a442-924ffa919312 891 0 2025-12-12 17:26:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f88db5b6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f88db5b6c-zlv8l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali46030ab47d8 [] [] }} ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-zlv8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.071 [INFO][4912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-zlv8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.099 [INFO][4921] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" HandleID="k8s-pod-network.779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Workload="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.099 [INFO][4921] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" HandleID="k8s-pod-network.779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Workload="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dc0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f88db5b6c-zlv8l", "timestamp":"2025-12-12 17:27:28.099675463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.099 [INFO][4921] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.099 [INFO][4921] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.100 [INFO][4921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.111 [INFO][4921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" host="localhost" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.115 [INFO][4921] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.119 [INFO][4921] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.121 [INFO][4921] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.123 [INFO][4921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.123 [INFO][4921] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" host="localhost" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.124 [INFO][4921] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4 Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.127 [INFO][4921] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" host="localhost" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.134 [INFO][4921] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" host="localhost" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.134 [INFO][4921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" host="localhost" Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.134 [INFO][4921] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:28.154142 containerd[1608]: 2025-12-12 17:27:28.134 [INFO][4921] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" HandleID="k8s-pod-network.779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Workload="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" Dec 12 17:27:28.154784 containerd[1608]: 2025-12-12 17:27:28.136 [INFO][4912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-zlv8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0", GenerateName:"calico-apiserver-5f88db5b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"79d51bdf-2c92-4c03-a442-924ffa919312", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88db5b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f88db5b6c-zlv8l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46030ab47d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:28.154784 containerd[1608]: 2025-12-12 17:27:28.136 [INFO][4912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-zlv8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" Dec 12 17:27:28.154784 containerd[1608]: 2025-12-12 17:27:28.136 [INFO][4912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46030ab47d8 ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-zlv8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" Dec 12 17:27:28.154784 containerd[1608]: 2025-12-12 17:27:28.141 [INFO][4912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-zlv8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" Dec 12 17:27:28.154784 containerd[1608]: 2025-12-12 17:27:28.142 [INFO][4912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-zlv8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0", GenerateName:"calico-apiserver-5f88db5b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"79d51bdf-2c92-4c03-a442-924ffa919312", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88db5b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4", Pod:"calico-apiserver-5f88db5b6c-zlv8l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46030ab47d8", MAC:"36:c8:b3:db:28:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:28.154784 containerd[1608]: 2025-12-12 17:27:28.150 [INFO][4912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" Namespace="calico-apiserver" Pod="calico-apiserver-5f88db5b6c-zlv8l" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f88db5b6c--zlv8l-eth0" Dec 12 17:27:28.167000 audit[4936]: NETFILTER_CFG table=filter:143 family=2 entries=53 op=nft_register_chain pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:28.167000 audit[4936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26624 a0=3 a1=ffffc4fba530 a2=0 a3=ffffb8a07fa8 items=0 ppid=4015 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.167000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:28.184968 containerd[1608]: time="2025-12-12T17:27:28.184626615Z" level=info msg="connecting to shim 779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4" address="unix:///run/containerd/s/d9e7fa758506311b1bcd89a0e2d78ceb49f263fe5a8b544944d770a4efd4e89f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:28.207318 systemd[1]: Started cri-containerd-779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4.scope - libcontainer container 779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4. 
Dec 12 17:27:28.217000 audit: BPF prog-id=249 op=LOAD Dec 12 17:27:28.218000 audit: BPF prog-id=250 op=LOAD Dec 12 17:27:28.218000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737396261383165386335643362316332303331613538646337313534 Dec 12 17:27:28.218000 audit: BPF prog-id=250 op=UNLOAD Dec 12 17:27:28.218000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737396261383165386335643362316332303331613538646337313534 Dec 12 17:27:28.218000 audit: BPF prog-id=251 op=LOAD Dec 12 17:27:28.218000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737396261383165386335643362316332303331613538646337313534 Dec 12 17:27:28.218000 audit: BPF prog-id=252 op=LOAD Dec 12 17:27:28.218000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737396261383165386335643362316332303331613538646337313534 Dec 12 17:27:28.218000 audit: BPF prog-id=252 op=UNLOAD Dec 12 17:27:28.218000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737396261383165386335643362316332303331613538646337313534 Dec 12 17:27:28.218000 audit: BPF prog-id=251 op=UNLOAD Dec 12 17:27:28.218000 audit[4957]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737396261383165386335643362316332303331613538646337313534 Dec 12 17:27:28.218000 audit: BPF prog-id=253 op=LOAD Dec 12 17:27:28.218000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737396261383165386335643362316332303331613538646337313534 Dec 12 17:27:28.219204 systemd-resolved[1271]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:27:28.245233 containerd[1608]: time="2025-12-12T17:27:28.245179930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88db5b6c-zlv8l,Uid:79d51bdf-2c92-4c03-a442-924ffa919312,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"779ba81e8c5d3b1c2031a58dc7154693929b7002f2a77b5790c720cfe5d01de4\"" Dec 12 17:27:28.245791 kubelet[2781]: E1212 17:27:28.245739 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:28.246071 kubelet[2781]: E1212 17:27:28.246045 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:28.247497 kubelet[2781]: E1212 17:27:28.247445 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" podUID="1274eff9-19f2-4a62-a07e-e088770fa339" Dec 12 17:27:28.247618 containerd[1608]: time="2025-12-12T17:27:28.247516273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:28.354000 audit[4989]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4989 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:28.354000 audit[4989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff7b822c0 a2=0 a3=1 items=0 ppid=2901 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.354000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:28.363000 audit[4989]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=4989 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:28.363000 audit[4989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff7b822c0 a2=0 a3=1 items=0 ppid=2901 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:28.363000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:28.470263 containerd[1608]: time="2025-12-12T17:27:28.470205430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:28.471711 containerd[1608]: time="2025-12-12T17:27:28.471657494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:28.471765 containerd[1608]: time="2025-12-12T17:27:28.471713617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:28.471969 kubelet[2781]: E1212 17:27:28.471921 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:28.472048 kubelet[2781]: E1212 17:27:28.471978 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:28.472553 kubelet[2781]: E1212 17:27:28.472182 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9z6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88db5b6c-zlv8l_calico-apiserver(79d51bdf-2c92-4c03-a442-924ffa919312): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:28.472647 systemd-networkd[1512]: cali2851882c5dc: Gained IPv6LL Dec 12 17:27:28.473437 kubelet[2781]: E1212 17:27:28.473395 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" podUID="79d51bdf-2c92-4c03-a442-924ffa919312" Dec 12 17:27:29.249628 kubelet[2781]: E1212 17:27:29.249370 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:29.250244 kubelet[2781]: E1212 17:27:29.250029 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" podUID="79d51bdf-2c92-4c03-a442-924ffa919312" Dec 12 17:27:29.250637 kubelet[2781]: E1212 17:27:29.250480 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" podUID="1274eff9-19f2-4a62-a07e-e088770fa339" Dec 12 17:27:29.279000 audit[4991]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=4991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:29.283851 kernel: kauditd_printk_skb: 250 callbacks suppressed Dec 12 17:27:29.283933 kernel: audit: type=1325 audit(1765560449.279:753): table=filter:146 family=2 entries=14 
op=nft_register_rule pid=4991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:29.283985 kernel: audit: type=1300 audit(1765560449.279:753): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff3022750 a2=0 a3=1 items=0 ppid=2901 pid=4991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:29.279000 audit[4991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff3022750 a2=0 a3=1 items=0 ppid=2901 pid=4991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:29.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:29.289669 kernel: audit: type=1327 audit(1765560449.279:753): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:29.291000 audit[4991]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=4991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:29.295151 kernel: audit: type=1325 audit(1765560449.291:754): table=nat:147 family=2 entries=20 op=nft_register_rule pid=4991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:29.295214 kernel: audit: type=1300 audit(1765560449.291:754): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff3022750 a2=0 a3=1 items=0 ppid=2901 pid=4991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:29.291000 audit[4991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff3022750 a2=0 a3=1 items=0 ppid=2901 pid=4991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:29.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:29.300594 kernel: audit: type=1327 audit(1765560449.291:754): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:29.560293 systemd-networkd[1512]: cali46030ab47d8: Gained IPv6LL Dec 12 17:27:29.795373 systemd[1]: Started sshd@9-10.0.0.57:22-10.0.0.1:50390.service - OpenSSH per-connection server daemon (10.0.0.1:50390). Dec 12 17:27:29.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.57:22-10.0.0.1:50390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:29.799154 kernel: audit: type=1130 audit(1765560449.794:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.57:22-10.0.0.1:50390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:29.861000 audit[4993]: USER_ACCT pid=4993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:29.863147 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 50390 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:29.865573 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:29.863000 audit[4993]: CRED_ACQ pid=4993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:29.869225 kernel: audit: type=1101 audit(1765560449.861:756): pid=4993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:29.869288 kernel: audit: type=1103 audit(1765560449.863:757): pid=4993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:29.871439 kernel: audit: type=1006 audit(1765560449.863:758): pid=4993 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 17:27:29.863000 audit[4993]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcea0f620 a2=3 a3=0 items=0 ppid=1 pid=4993 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:29.863000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:29.873166 systemd-logind[1592]: New session 10 of user core. Dec 12 17:27:29.884347 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 12 17:27:29.885000 audit[4993]: USER_START pid=4993 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:29.887000 audit[4996]: CRED_ACQ pid=4996 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.049310 sshd[4996]: Connection closed by 10.0.0.1 port 50390 Dec 12 17:27:30.050339 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:30.050000 audit[4993]: USER_END pid=4993 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.050000 audit[4993]: CRED_DISP pid=4993 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.060542 systemd[1]: sshd@9-10.0.0.57:22-10.0.0.1:50390.service: Deactivated successfully. Dec 12 17:27:30.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.57:22-10.0.0.1:50390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:30.063794 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:27:30.064874 systemd-logind[1592]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:27:30.068806 systemd[1]: Started sshd@10-10.0.0.57:22-10.0.0.1:50404.service - OpenSSH per-connection server daemon (10.0.0.1:50404). Dec 12 17:27:30.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.57:22-10.0.0.1:50404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:30.070485 systemd-logind[1592]: Removed session 10. 
Dec 12 17:27:30.118000 audit[5010]: USER_ACCT pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.119769 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 50404 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:30.120000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.120000 audit[5010]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd90d2e0 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:30.120000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:30.122189 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:30.127191 systemd-logind[1592]: New session 11 of user core. Dec 12 17:27:30.136337 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 17:27:30.137000 audit[5010]: USER_START pid=5010 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.138000 audit[5013]: CRED_ACQ pid=5013 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.254161 kubelet[2781]: E1212 17:27:30.253876 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" podUID="79d51bdf-2c92-4c03-a442-924ffa919312" Dec 12 17:27:30.271841 sshd[5013]: Connection closed by 10.0.0.1 port 50404 Dec 12 17:27:30.272467 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:30.274000 audit[5010]: USER_END pid=5010 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.274000 audit[5010]: CRED_DISP pid=5010 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.285485 systemd[1]: sshd@10-10.0.0.57:22-10.0.0.1:50404.service: Deactivated 
successfully. Dec 12 17:27:30.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.57:22-10.0.0.1:50404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:30.288691 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:27:30.290490 systemd-logind[1592]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:27:30.296479 systemd-logind[1592]: Removed session 11. Dec 12 17:27:30.298092 systemd[1]: Started sshd@11-10.0.0.57:22-10.0.0.1:50414.service - OpenSSH per-connection server daemon (10.0.0.1:50414). Dec 12 17:27:30.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.57:22-10.0.0.1:50414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:30.359000 audit[5026]: USER_ACCT pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.360908 sshd[5026]: Accepted publickey for core from 10.0.0.1 port 50414 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:30.360000 audit[5026]: CRED_ACQ pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.360000 audit[5026]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd37d7390 a2=3 a3=0 items=0 ppid=1 pid=5026 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:30.360000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:30.362450 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:30.368220 systemd-logind[1592]: New session 12 of user core. Dec 12 17:27:30.380341 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 17:27:30.382000 audit[5026]: USER_START pid=5026 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.383000 audit[5029]: CRED_ACQ pid=5029 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.532461 sshd[5029]: Connection closed by 10.0.0.1 port 50414 Dec 12 17:27:30.532833 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:30.534000 audit[5026]: USER_END pid=5026 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.535000 audit[5026]: CRED_DISP pid=5026 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:30.540496 systemd-logind[1592]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:27:30.540584 systemd[1]: sshd@11-10.0.0.57:22-10.0.0.1:50414.service: Deactivated successfully. Dec 12 17:27:30.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.57:22-10.0.0.1:50414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:30.543528 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:27:30.546731 systemd-logind[1592]: Removed session 12. 
Dec 12 17:27:32.031242 containerd[1608]: time="2025-12-12T17:27:32.031201433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:27:32.218318 containerd[1608]: time="2025-12-12T17:27:32.218276336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:32.219610 containerd[1608]: time="2025-12-12T17:27:32.219575309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:27:32.219684 containerd[1608]: time="2025-12-12T17:27:32.219622111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:32.219872 kubelet[2781]: E1212 17:27:32.219809 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:32.220181 kubelet[2781]: E1212 17:27:32.219887 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:32.220181 kubelet[2781]: E1212 17:27:32.220025 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e116dbdb0e33466c944d333c658c0107,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djlw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5649bfbdf7-hfvhj_calico-system(6cfb895d-8011-4a5d-b5ce-fee54d2880b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:32.222894 containerd[1608]: time="2025-12-12T17:27:32.222870202Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:27:32.409437 containerd[1608]: time="2025-12-12T17:27:32.408891183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:32.409913 containerd[1608]: time="2025-12-12T17:27:32.409858822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:27:32.409991 containerd[1608]: time="2025-12-12T17:27:32.409933785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:32.410110 kubelet[2781]: E1212 17:27:32.410068 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:32.410210 kubelet[2781]: E1212 17:27:32.410147 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:32.410346 kubelet[2781]: E1212 17:27:32.410276 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djlw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5649bfbdf7-hfvhj_calico-system(6cfb895d-8011-4a5d-b5ce-fee54d2880b7): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:32.411480 kubelet[2781]: E1212 17:27:32.411419 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5649bfbdf7-hfvhj" podUID="6cfb895d-8011-4a5d-b5ce-fee54d2880b7" Dec 12 17:27:35.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.57:22-10.0.0.1:59754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:35.552177 systemd[1]: Started sshd@12-10.0.0.57:22-10.0.0.1:59754.service - OpenSSH per-connection server daemon (10.0.0.1:59754). Dec 12 17:27:35.556817 kernel: kauditd_printk_skb: 29 callbacks suppressed Dec 12 17:27:35.556919 kernel: audit: type=1130 audit(1765560455.550:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.57:22-10.0.0.1:59754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:35.610000 audit[5050]: USER_ACCT pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.612017 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 59754 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:35.614405 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:35.612000 audit[5050]: CRED_ACQ pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.618279 kernel: audit: type=1101 audit(1765560455.610:783): pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.618369 kernel: audit: type=1103 audit(1765560455.612:784): pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.621193 kernel: audit: type=1006 audit(1765560455.612:785): pid=5050 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 17:27:35.621365 kernel: audit: type=1300 audit(1765560455.612:785): 
arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5f0f600 a2=3 a3=0 items=0 ppid=1 pid=5050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:35.612000 audit[5050]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5f0f600 a2=3 a3=0 items=0 ppid=1 pid=5050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:35.623184 systemd-logind[1592]: New session 13 of user core. Dec 12 17:27:35.612000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:35.626314 kernel: audit: type=1327 audit(1765560455.612:785): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:35.631357 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:27:35.632000 audit[5050]: USER_START pid=5050 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.634000 audit[5053]: CRED_ACQ pid=5053 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.641043 kernel: audit: type=1105 audit(1765560455.632:786): pid=5050 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.641213 kernel: audit: type=1103 audit(1765560455.634:787): pid=5053 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.713155 sshd[5053]: Connection closed by 10.0.0.1 port 59754 Dec 12 17:27:35.713852 sshd-session[5050]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:35.714000 audit[5050]: USER_END pid=5050 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.718570 systemd[1]: sshd@12-10.0.0.57:22-10.0.0.1:59754.service: Deactivated successfully. 
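The repeated "fetch failed after status: 404 Not Found" entries mean the registry has no manifest at the requested reference, which containerd then reports as "failed to resolve image ... not found". A minimal sketch (illustrative only, not part of this host's tooling) that asks ghcr.io directly whether such a tag resolves, using the standard anonymous bearer-token flow; note that the token step itself may be refused for private or nonexistent repositories, so treat any non-200 status as "does not resolve here":

```python
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
# Repository path and tag taken from the failing PullImage calls above.
REPO = "flatcar/calico/whisker"
TAG = "v3.30.4"

def manifest_status(repo: str, tag: str) -> int:
    """Return the HTTP status of the manifest request (404 == tag not found)."""
    # Anonymous pull token, the usual flow for public ghcr.io images.
    token_url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

if __name__ == "__main__":
    print(f"{REPO}:{TAG} -> HTTP {manifest_status(REPO, TAG)}")
```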
Dec 12 17:27:35.720148 kernel: audit: type=1106 audit(1765560455.714:788): pid=5050 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.720224 kernel: audit: type=1104 audit(1765560455.714:789): pid=5050 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.714000 audit[5050]: CRED_DISP pid=5050 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:35.720545 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:27:35.721883 systemd-logind[1592]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:27:35.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.57:22-10.0.0.1:59754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:35.723700 systemd-logind[1592]: Removed session 13. Dec 12 17:27:39.030884 containerd[1608]: time="2025-12-12T17:27:39.030772006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:27:39.257028 containerd[1608]: time="2025-12-12T17:27:39.256968999Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:39.258127 containerd[1608]: time="2025-12-12T17:27:39.258046238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:27:39.258207 containerd[1608]: time="2025-12-12T17:27:39.258136801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:39.258399 kubelet[2781]: E1212 17:27:39.258327 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:39.259817 kubelet[2781]: E1212 17:27:39.258400 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:39.259817 kubelet[2781]: E1212 17:27:39.258550 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7htq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-795ddb8d7d-wsdsj_calico-system(72a9762c-3e28-4065-81d7-b33bc02428ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:39.259817 kubelet[2781]: E1212 17:27:39.259701 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" podUID="72a9762c-3e28-4065-81d7-b33bc02428ff" Dec 12 17:27:40.730024 systemd[1]: Started sshd@13-10.0.0.57:22-10.0.0.1:59756.service - OpenSSH per-connection server daemon (10.0.0.1:59756). 
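Every pull failure above follows the same containerd error shape, so the distinct failing image references can be tallied straight from journal text like this. A small ad-hoc helper sketch (assumed for illustration, not part of the system shown in the log):

```python
import re
import sys
from collections import Counter

# Matches containerd error entries such as:
#   ... level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" ...
PULL_FAILED = re.compile(r'PullImage \\?"(?P<image>[^"\\]+)\\?" failed')

def failed_images(lines) -> Counter:
    """Count how often each image reference shows up in a failed PullImage entry."""
    counts = Counter()
    for line in lines:
        m = PULL_FAILED.search(line)
        if m:
            counts[m.group("image")] += 1
    return counts

if __name__ == "__main__":
    for image, n in failed_images(sys.stdin).most_common():
        print(f"{n:4d}  {image}")
```

On the entries in this section it would list the whisker, whisker-backend, kube-controllers, goldmane, csi, node-driver-registrar, and apiserver references, all at v3.30.4 and all failing with the same 404.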
Dec 12 17:27:40.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.57:22-10.0.0.1:59756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:40.734694 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:27:40.734774 kernel: audit: type=1130 audit(1765560460.729:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.57:22-10.0.0.1:59756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:40.800000 audit[5077]: USER_ACCT pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.801554 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 59756 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:40.804000 audit[5077]: CRED_ACQ pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.805739 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:40.808283 kernel: audit: type=1101 audit(1765560460.800:792): pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.808353 kernel: audit: type=1103 audit(1765560460.804:793): pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.810149 systemd-logind[1592]: New session 14 of user core. Dec 12 17:27:40.811152 kernel: audit: type=1006 audit(1765560460.804:794): pid=5077 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 17:27:40.811211 kernel: audit: type=1300 audit(1765560460.804:794): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1fadc90 a2=3 a3=0 items=0 ppid=1 pid=5077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:40.804000 audit[5077]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1fadc90 a2=3 a3=0 items=0 ppid=1 pid=5077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:40.804000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:40.815290 kernel: audit: type=1327 audit(1765560460.804:794): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:40.820418 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 17:27:40.821000 audit[5077]: USER_START pid=5077 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.823000 audit[5080]: CRED_ACQ pid=5080 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.830382 kernel: audit: type=1105 audit(1765560460.821:795): pid=5077 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.831071 kernel: audit: type=1103 audit(1765560460.823:796): pid=5080 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.922987 sshd[5080]: Connection closed by 10.0.0.1 port 59756 Dec 12 17:27:40.923620 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:40.923000 audit[5077]: USER_END pid=5077 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.928781 systemd[1]: sshd@13-10.0.0.57:22-10.0.0.1:59756.service: Deactivated successfully. Dec 12 17:27:40.930483 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:27:40.932682 systemd-logind[1592]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:27:40.924000 audit[5077]: CRED_DISP pid=5077 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.937173 kernel: audit: type=1106 audit(1765560460.923:797): pid=5077 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.937254 kernel: audit: type=1104 audit(1765560460.924:798): pid=5077 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:40.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.57:22-10.0.0.1:59756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:40.937840 systemd-logind[1592]: Removed session 14. 
Dec 12 17:27:41.033663 containerd[1608]: time="2025-12-12T17:27:41.032220868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:27:41.213417 containerd[1608]: time="2025-12-12T17:27:41.213193179Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:41.214564 containerd[1608]: time="2025-12-12T17:27:41.214225175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:27:41.214564 containerd[1608]: time="2025-12-12T17:27:41.214265416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:41.214733 kubelet[2781]: E1212 17:27:41.214519 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:41.214733 kubelet[2781]: E1212 17:27:41.214575 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:41.215811 kubelet[2781]: E1212 17:27:41.214782 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwpkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nnrnj_calico-system(15a11d37-fdc8-4b22-a7c5-9c4c4246dd24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:41.215898 containerd[1608]: time="2025-12-12T17:27:41.215017003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:27:41.216498 kubelet[2781]: E1212 17:27:41.216275 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nnrnj" podUID="15a11d37-fdc8-4b22-a7c5-9c4c4246dd24" Dec 12 17:27:41.424139 containerd[1608]: time="2025-12-12T17:27:41.423571362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:41.425170 containerd[1608]: time="2025-12-12T17:27:41.424963370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:27:41.425170 containerd[1608]: time="2025-12-12T17:27:41.424981851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:41.425265 kubelet[2781]: E1212 17:27:41.425196 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:41.425265 kubelet[2781]: E1212 17:27:41.425240 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:41.425414 kubelet[2781]: E1212 17:27:41.425354 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgzfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffhww_calico-system(5fefc895-2faa-4b6c-b800-5fdfceed3426): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:41.428176 containerd[1608]: time="2025-12-12T17:27:41.427975836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:27:41.629830 containerd[1608]: time="2025-12-12T17:27:41.629720356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:41.630684 containerd[1608]: time="2025-12-12T17:27:41.630600467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:27:41.630757 containerd[1608]: time="2025-12-12T17:27:41.630717951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:41.631001 kubelet[2781]: E1212 17:27:41.630943 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:41.631001 kubelet[2781]: E1212 17:27:41.630994 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:41.631195 kubelet[2781]: E1212 17:27:41.631155 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgzfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffhww_calico-system(5fefc895-2faa-4b6c-b800-5fdfceed3426): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:41.632481 kubelet[2781]: E1212 17:27:41.632393 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:42.031571 containerd[1608]: time="2025-12-12T17:27:42.031523363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:42.253772 containerd[1608]: time="2025-12-12T17:27:42.253646863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
17:27:42.254704 containerd[1608]: time="2025-12-12T17:27:42.254672258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:42.254704 containerd[1608]: time="2025-12-12T17:27:42.254729060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:42.254947 kubelet[2781]: E1212 17:27:42.254888 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:42.255235 kubelet[2781]: E1212 17:27:42.254949 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:42.255235 kubelet[2781]: E1212 17:27:42.255070 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shfcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88db5b6c-t62vd_calico-apiserver(1274eff9-19f2-4a62-a07e-e088770fa339): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:42.256646 kubelet[2781]: E1212 17:27:42.256592 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" podUID="1274eff9-19f2-4a62-a07e-e088770fa339" Dec 12 17:27:43.031970 containerd[1608]: time="2025-12-12T17:27:43.031612096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:43.231684 containerd[1608]: time="2025-12-12T17:27:43.231640708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:43.237944 containerd[1608]: time="2025-12-12T17:27:43.237898762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:43.238306 containerd[1608]: time="2025-12-12T17:27:43.237983325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:43.238363 kubelet[2781]: E1212 17:27:43.238319 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:43.238408 kubelet[2781]: E1212 17:27:43.238380 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:43.238591 kubelet[2781]: E1212 17:27:43.238542 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9z6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88db5b6c-zlv8l_calico-apiserver(79d51bdf-2c92-4c03-a442-924ffa919312): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:43.239716 kubelet[2781]: E1212 17:27:43.239687 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" podUID="79d51bdf-2c92-4c03-a442-924ffa919312" Dec 12 17:27:44.035072 kubelet[2781]: E1212 17:27:44.034848 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5649bfbdf7-hfvhj" podUID="6cfb895d-8011-4a5d-b5ce-fee54d2880b7" Dec 12 17:27:45.939709 systemd[1]: Started sshd@14-10.0.0.57:22-10.0.0.1:49588.service - OpenSSH per-connection server daemon (10.0.0.1:49588). Dec 12 17:27:45.942639 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:27:45.942744 kernel: audit: type=1130 audit(1765560465.938:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.57:22-10.0.0.1:49588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:45.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.57:22-10.0.0.1:49588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:46.004000 audit[5095]: USER_ACCT pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.009742 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 49588 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:46.011729 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:46.013332 kernel: audit: type=1101 audit(1765560466.004:801): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.013392 kernel: audit: type=1103 audit(1765560466.010:802): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.010000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.018291 kernel: audit: type=1006 audit(1765560466.010:803): pid=5095 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 17:27:46.010000 audit[5095]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcccb61c0 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:46.020506 systemd-logind[1592]: New session 15 of user core. Dec 12 17:27:46.022126 kernel: audit: type=1300 audit(1765560466.010:803): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcccb61c0 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:46.022179 kernel: audit: type=1327 audit(1765560466.010:803): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:46.010000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:46.031310 systemd[1]: Started session-15.scope - Session 15 of User core. 
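By 17:27:44 the whisker pod's errors have shifted from ErrImagePull to ImagePullBackOff, meaning kubelet is now spacing out retries rather than pulling on every sync. A worked sketch of that retry spacing, assuming the commonly cited kubelet defaults of a 10 s initial image-pull backoff doubling up to a 300 s cap (these values are assumptions for illustration, not read from this node's configuration):

```python
def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8) -> list:
    """Successive delays between image pull retries under exponential backoff."""
    delay, schedule = initial, []
    for _ in range(attempts):
        schedule.append(delay)
        delay = min(delay * 2, cap)
    return schedule

# -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
print(backoff_schedule())
```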
Dec 12 17:27:46.032000 audit[5095]: USER_START pid=5095 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.041370 kernel: audit: type=1105 audit(1765560466.032:804): pid=5095 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.041433 kernel: audit: type=1103 audit(1765560466.040:805): pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.040000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.136287 sshd[5098]: Connection closed by 10.0.0.1 port 49588 Dec 12 17:27:46.136783 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:46.137000 audit[5095]: USER_END pid=5095 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.141285 systemd[1]: sshd@14-10.0.0.57:22-10.0.0.1:49588.service: Deactivated successfully. Dec 12 17:27:46.142985 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:27:46.137000 audit[5095]: CRED_DISP pid=5095 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.145041 systemd-logind[1592]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:27:46.146347 systemd-logind[1592]: Removed session 15. Dec 12 17:27:46.148333 kernel: audit: type=1106 audit(1765560466.137:806): pid=5095 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.148449 kernel: audit: type=1104 audit(1765560466.137:807): pid=5095 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:46.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.57:22-10.0.0.1:49588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:49.284341 kubelet[2781]: E1212 17:27:49.284250 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:51.147578 systemd[1]: Started sshd@15-10.0.0.57:22-10.0.0.1:44866.service - OpenSSH per-connection server daemon (10.0.0.1:44866). Dec 12 17:27:51.148407 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:27:51.148477 kernel: audit: type=1130 audit(1765560471.146:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.57:22-10.0.0.1:44866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:51.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.57:22-10.0.0.1:44866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:51.215000 audit[5144]: USER_ACCT pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.216770 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 44866 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:51.220051 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:51.218000 audit[5144]: CRED_ACQ pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.224063 kernel: audit: type=1101 audit(1765560471.215:810): pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.224146 kernel: audit: type=1103 audit(1765560471.218:811): pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.226160 kernel: audit: type=1006 audit(1765560471.218:812): pid=5144 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 17:27:51.226218 kernel: audit: type=1300 audit(1765560471.218:812): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff81fd8a0 a2=3 a3=0 items=0 ppid=1 pid=5144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:51.218000 audit[5144]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff81fd8a0 a2=3 a3=0 items=0 ppid=1 pid=5144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:51.226288 systemd-logind[1592]: New session 16 of user core. 
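The "Nameserver limits exceeded" entry at 17:27:49 means the node's resolv.conf lists more nameservers than kubelet will pass through; kubelet keeps only the first three, which is exactly the applied line it logs (1.1.1.1 1.0.0.1 8.8.8.8). A small sketch of that trimming, with a hypothetical fourth resolver added purely to show what would be omitted:

```python
MAX_NAMESERVERS = 3  # the resolv.conf limit kubelet enforces when it logs this warning

def split_nameservers(resolv_conf_text: str):
    """Return (applied, omitted) nameserver lists, mirroring kubelet's trimming."""
    servers = [
        parts[1]
        for parts in (line.split() for line in resolv_conf_text.splitlines())
        if len(parts) > 1 and parts[0] == "nameserver"
    ]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

# 9.9.9.9 is a hypothetical extra resolver, not taken from this node's config.
sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
applied, omitted = split_nameservers(sample)
print("applied:", " ".join(applied), "| omitted:", " ".join(omitted))
```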
Dec 12 17:27:51.218000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:51.230713 kernel: audit: type=1327 audit(1765560471.218:812): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:51.236306 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 17:27:51.237000 audit[5144]: USER_START pid=5144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.239000 audit[5147]: CRED_ACQ pid=5147 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.247076 kernel: audit: type=1105 audit(1765560471.237:813): pid=5144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.247479 kernel: audit: type=1103 audit(1765560471.239:814): pid=5147 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.354327 sshd[5147]: Connection closed by 10.0.0.1 port 44866 Dec 12 17:27:51.355392 sshd-session[5144]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:51.355000 audit[5144]: USER_END pid=5144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.355000 audit[5144]: CRED_DISP pid=5144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.363999 kernel: audit: type=1106 audit(1765560471.355:815): pid=5144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.364033 kernel: audit: type=1104 audit(1765560471.355:816): pid=5144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.368351 systemd[1]: sshd@15-10.0.0.57:22-10.0.0.1:44866.service: Deactivated successfully. Dec 12 17:27:51.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.57:22-10.0.0.1:44866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:51.371241 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 12 17:27:51.372657 systemd-logind[1592]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:27:51.374878 systemd-logind[1592]: Removed session 16. Dec 12 17:27:51.376803 systemd[1]: Started sshd@16-10.0.0.57:22-10.0.0.1:44868.service - OpenSSH per-connection server daemon (10.0.0.1:44868). Dec 12 17:27:51.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.57:22-10.0.0.1:44868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:51.437000 audit[5160]: USER_ACCT pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.439088 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 44868 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:51.438000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.438000 audit[5160]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffccc05de0 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:51.438000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:51.440391 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:51.445228 systemd-logind[1592]: New session 17 of user core. Dec 12 17:27:51.454357 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:27:51.455000 audit[5160]: USER_START pid=5160 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.457000 audit[5163]: CRED_ACQ pid=5163 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.620165 sshd[5163]: Connection closed by 10.0.0.1 port 44868 Dec 12 17:27:51.620548 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:51.620000 audit[5160]: USER_END pid=5160 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.620000 audit[5160]: CRED_DISP pid=5160 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.632829 systemd[1]: sshd@16-10.0.0.57:22-10.0.0.1:44868.service: Deactivated successfully. 
Dec 12 17:27:51.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.57:22-10.0.0.1:44868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:51.636342 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:27:51.638644 systemd-logind[1592]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:27:51.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.57:22-10.0.0.1:44874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:51.640434 systemd[1]: Started sshd@17-10.0.0.57:22-10.0.0.1:44874.service - OpenSSH per-connection server daemon (10.0.0.1:44874). Dec 12 17:27:51.642688 systemd-logind[1592]: Removed session 17. Dec 12 17:27:51.703000 audit[5174]: USER_ACCT pid=5174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.704638 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 44874 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:51.704000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.705000 audit[5174]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff811fdb0 a2=3 a3=0 items=0 ppid=1 pid=5174 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:51.705000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:51.706578 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:51.711230 systemd-logind[1592]: New session 18 of user core. Dec 12 17:27:51.722359 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 17:27:51.725000 audit[5174]: USER_START pid=5174 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:51.727000 audit[5177]: CRED_ACQ pid=5177 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.030093 kubelet[2781]: E1212 17:27:52.029924 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:52.292000 audit[5193]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:52.292000 audit[5193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffda0ca5b0 a2=0 a3=1 items=0 ppid=2901 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:52.299000 audit[5193]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:52.299000 audit[5193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffda0ca5b0 a2=0 a3=1 items=0 ppid=2901 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.299000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:52.305961 sshd[5177]: Connection closed by 10.0.0.1 port 44874 Dec 12 17:27:52.307389 sshd-session[5174]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:52.307000 audit[5174]: USER_END pid=5174 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.308000 audit[5174]: CRED_DISP pid=5174 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.316770 systemd[1]: sshd@17-10.0.0.57:22-10.0.0.1:44874.service: Deactivated successfully. Dec 12 17:27:52.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.57:22-10.0.0.1:44874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:52.317000 audit[5196]: NETFILTER_CFG table=filter:150 family=2 entries=38 op=nft_register_rule pid=5196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:52.317000 audit[5196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe91857c0 a2=0 a3=1 items=0 ppid=2901 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.317000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:52.320479 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:27:52.322325 systemd-logind[1592]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:27:52.327837 systemd[1]: Started sshd@18-10.0.0.57:22-10.0.0.1:44886.service - OpenSSH per-connection server daemon (10.0.0.1:44886). Dec 12 17:27:52.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.57:22-10.0.0.1:44886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:52.329699 systemd-logind[1592]: Removed session 18. Dec 12 17:27:52.330000 audit[5196]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:52.330000 audit[5196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe91857c0 a2=0 a3=1 items=0 ppid=2901 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:52.377000 audit[5200]: USER_ACCT pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.378649 sshd[5200]: Accepted publickey for core from 10.0.0.1 port 44886 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:52.378000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.378000 audit[5200]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce4dec00 a2=3 a3=0 items=0 ppid=1 pid=5200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.378000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:52.380215 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:52.385054 systemd-logind[1592]: New session 19 of user core. Dec 12 17:27:52.396313 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 17:27:52.397000 audit[5200]: USER_START pid=5200 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.399000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.655318 sshd[5203]: Connection closed by 10.0.0.1 port 44886 Dec 12 17:27:52.656048 sshd-session[5200]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:52.658000 audit[5200]: USER_END pid=5200 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.658000 audit[5200]: CRED_DISP pid=5200 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.667022 systemd[1]: sshd@18-10.0.0.57:22-10.0.0.1:44886.service: Deactivated successfully. Dec 12 17:27:52.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.57:22-10.0.0.1:44886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:52.669292 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:27:52.670111 systemd-logind[1592]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:27:52.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.57:22-10.0.0.1:44896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:52.674007 systemd[1]: Started sshd@19-10.0.0.57:22-10.0.0.1:44896.service - OpenSSH per-connection server daemon (10.0.0.1:44896). Dec 12 17:27:52.675549 systemd-logind[1592]: Removed session 19. 
Dec 12 17:27:52.740148 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 44896 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:52.738000 audit[5215]: USER_ACCT pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.741000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.741000 audit[5215]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2872ae0 a2=3 a3=0 items=0 ppid=1 pid=5215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:52.741000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:52.742702 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:52.752395 systemd-logind[1592]: New session 20 of user core. Dec 12 17:27:52.758819 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:27:52.761000 audit[5215]: USER_START pid=5215 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.765000 audit[5218]: CRED_ACQ pid=5218 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.877351 sshd[5218]: Connection closed by 10.0.0.1 port 44896 Dec 12 17:27:52.877219 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:52.878000 audit[5215]: USER_END pid=5215 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.878000 audit[5215]: CRED_DISP pid=5215 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:52.882655 systemd[1]: sshd@19-10.0.0.57:22-10.0.0.1:44896.service: Deactivated successfully. Dec 12 17:27:52.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.57:22-10.0.0.1:44896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:52.884564 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:27:52.886281 systemd-logind[1592]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:27:52.887791 systemd-logind[1592]: Removed session 20. 
Dec 12 17:27:53.032588 kubelet[2781]: E1212 17:27:53.032430 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" podUID="72a9762c-3e28-4065-81d7-b33bc02428ff" Dec 12 17:27:54.031147 kubelet[2781]: E1212 17:27:54.030900 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" podUID="1274eff9-19f2-4a62-a07e-e088770fa339" Dec 12 17:27:56.030942 kubelet[2781]: E1212 17:27:56.030594 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:27:56.031332 kubelet[2781]: E1212 17:27:56.031195 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-zlv8l" podUID="79d51bdf-2c92-4c03-a442-924ffa919312" Dec 12 17:27:56.031606 kubelet[2781]: E1212 17:27:56.031575 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:27:56.922000 audit[5234]: NETFILTER_CFG table=filter:152 family=2 entries=26 op=nft_register_rule pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:56.924744 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 17:27:56.924814 kernel: audit: type=1325 audit(1765560476.922:858): table=filter:152 family=2 entries=26 op=nft_register_rule pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:56.922000 audit[5234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd23fbcd0 a2=0 
a3=1 items=0 ppid=2901 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.930664 kernel: audit: type=1300 audit(1765560476.922:858): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd23fbcd0 a2=0 a3=1 items=0 ppid=2901 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.930729 kernel: audit: type=1327 audit(1765560476.922:858): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:56.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:56.931000 audit[5234]: NETFILTER_CFG table=nat:153 family=2 entries=104 op=nft_register_chain pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:56.931000 audit[5234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd23fbcd0 a2=0 a3=1 items=0 ppid=2901 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.938708 kernel: audit: type=1325 audit(1765560476.931:859): table=nat:153 family=2 entries=104 op=nft_register_chain pid=5234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:56.938767 kernel: audit: type=1300 audit(1765560476.931:859): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd23fbcd0 a2=0 a3=1 items=0 ppid=2901 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:56.938789 kernel: audit: type=1327 audit(1765560476.931:859): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:56.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:57.031806 kubelet[2781]: E1212 17:27:57.031747 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nnrnj" podUID="15a11d37-fdc8-4b22-a7c5-9c4c4246dd24" Dec 12 17:27:57.033213 containerd[1608]: time="2025-12-12T17:27:57.032384974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:27:57.228707 containerd[1608]: time="2025-12-12T17:27:57.228644183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:57.231171 containerd[1608]: time="2025-12-12T17:27:57.231108718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:27:57.231276 containerd[1608]: time="2025-12-12T17:27:57.231138518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:57.231399 kubelet[2781]: E1212 17:27:57.231355 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:57.231441 kubelet[2781]: E1212 17:27:57.231399 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:57.231629 kubelet[2781]: E1212 17:27:57.231593 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e116dbdb0e33466c944d333c658c0107,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djlw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5649bfbdf7-hfvhj_calico-system(6cfb895d-8011-4a5d-b5ce-fee54d2880b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:57.233978 containerd[1608]: time="2025-12-12T17:27:57.233913210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:27:57.423443 containerd[1608]: time="2025-12-12T17:27:57.423397967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:57.424402 containerd[1608]: time="2025-12-12T17:27:57.424307478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 
12 17:27:57.424402 containerd[1608]: time="2025-12-12T17:27:57.424346877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:57.424611 kubelet[2781]: E1212 17:27:57.424549 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:57.424658 kubelet[2781]: E1212 17:27:57.424605 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:57.424782 kubelet[2781]: E1212 17:27:57.424746 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djlw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5649bfbdf7-hfvhj_calico-system(6cfb895d-8011-4a5d-b5ce-fee54d2880b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:57.426015 kubelet[2781]: E1212 17:27:57.425946 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5649bfbdf7-hfvhj" podUID="6cfb895d-8011-4a5d-b5ce-fee54d2880b7" Dec 12 17:27:57.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.57:22-10.0.0.1:45032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:57.889711 systemd[1]: Started sshd@20-10.0.0.57:22-10.0.0.1:45032.service - OpenSSH per-connection server daemon (10.0.0.1:45032). Dec 12 17:27:57.893151 kernel: audit: type=1130 audit(1765560477.888:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.57:22-10.0.0.1:45032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:57.964000 audit[5237]: USER_ACCT pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:57.965843 sshd[5237]: Accepted publickey for core from 10.0.0.1 port 45032 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:27:57.968096 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:57.966000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:57.971970 kernel: audit: type=1101 audit(1765560477.964:861): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:57.972030 kernel: audit: type=1103 audit(1765560477.966:862): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:57.972048 kernel: audit: type=1006 audit(1765560477.966:863): pid=5237 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 12 17:27:57.966000 audit[5237]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffec7a24f0 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:57.966000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:57.973970 systemd-logind[1592]: New session 21 of user core. Dec 12 17:27:57.989335 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:27:57.991000 audit[5237]: USER_START pid=5237 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:57.993000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:58.093985 sshd[5240]: Connection closed by 10.0.0.1 port 45032 Dec 12 17:27:58.094600 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:58.095000 audit[5237]: USER_END pid=5237 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:58.095000 audit[5237]: CRED_DISP pid=5237 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:27:58.100248 systemd[1]: sshd@20-10.0.0.57:22-10.0.0.1:45032.service: Deactivated successfully. Dec 12 17:27:58.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.57:22-10.0.0.1:45032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:58.103738 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:27:58.104521 systemd-logind[1592]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:27:58.105699 systemd-logind[1592]: Removed session 21. Dec 12 17:28:03.111746 systemd[1]: Started sshd@21-10.0.0.57:22-10.0.0.1:36770.service - OpenSSH per-connection server daemon (10.0.0.1:36770). Dec 12 17:28:03.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.57:22-10.0.0.1:36770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:03.112924 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:28:03.112972 kernel: audit: type=1130 audit(1765560483.110:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.57:22-10.0.0.1:36770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:03.171000 audit[5260]: USER_ACCT pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.172947 sshd[5260]: Accepted publickey for core from 10.0.0.1 port 36770 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:28:03.175090 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:03.173000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.179260 kernel: audit: type=1101 audit(1765560483.171:870): pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.179308 kernel: audit: type=1103 audit(1765560483.173:871): pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.181336 kernel: audit: type=1006 audit(1765560483.173:872): pid=5260 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 17:28:03.181409 kernel: audit: type=1300 audit(1765560483.173:872): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcabe2510 a2=3 a3=0 items=0 ppid=1 pid=5260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.173000 audit[5260]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcabe2510 a2=3 a3=0 items=0 ppid=1 pid=5260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:03.180632 systemd-logind[1592]: New session 22 of user core. Dec 12 17:28:03.184415 kernel: audit: type=1327 audit(1765560483.173:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:03.173000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:03.194430 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 17:28:03.196000 audit[5260]: USER_START pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.198000 audit[5263]: CRED_ACQ pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.204958 kernel: audit: type=1105 audit(1765560483.196:873): pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.205041 kernel: audit: type=1103 audit(1765560483.198:874): pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.300740 sshd[5263]: Connection closed by 10.0.0.1 port 36770 Dec 12 17:28:03.301098 sshd-session[5260]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:03.301000 audit[5260]: USER_END pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.305367 systemd[1]: sshd@21-10.0.0.57:22-10.0.0.1:36770.service: Deactivated successfully. Dec 12 17:28:03.301000 audit[5260]: CRED_DISP pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.311112 kernel: audit: type=1106 audit(1765560483.301:875): pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.311182 kernel: audit: type=1104 audit(1765560483.301:876): pid=5260 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:03.307977 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:28:03.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.57:22-10.0.0.1:36770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:03.311474 systemd-logind[1592]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:28:03.312721 systemd-logind[1592]: Removed session 22. 
Dec 12 17:28:04.031335 containerd[1608]: time="2025-12-12T17:28:04.031255272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:28:04.235101 containerd[1608]: time="2025-12-12T17:28:04.235053636Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:04.236506 containerd[1608]: time="2025-12-12T17:28:04.236385032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:28:04.236506 containerd[1608]: time="2025-12-12T17:28:04.236457792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:04.236840 kubelet[2781]: E1212 17:28:04.236794 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:04.237479 kubelet[2781]: E1212 17:28:04.237216 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:04.237704 kubelet[2781]: E1212 17:28:04.237632 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7htq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-795ddb8d7d-wsdsj_calico-system(72a9762c-3e28-4065-81d7-b33bc02428ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:04.238859 kubelet[2781]: E1212 17:28:04.238814 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-795ddb8d7d-wsdsj" podUID="72a9762c-3e28-4065-81d7-b33bc02428ff" Dec 12 17:28:07.032630 containerd[1608]: time="2025-12-12T17:28:07.032564670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:07.236109 containerd[1608]: time="2025-12-12T17:28:07.236059448Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:07.237885 containerd[1608]: time="2025-12-12T17:28:07.237836366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:07.237974 containerd[1608]: time="2025-12-12T17:28:07.237927366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:07.238131 kubelet[2781]: E1212 17:28:07.238082 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:07.238533 kubelet[2781]: E1212 17:28:07.238173 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:07.238533 kubelet[2781]: E1212 17:28:07.238477 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgzfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffhww_calico-system(5fefc895-2faa-4b6c-b800-5fdfceed3426): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:07.238653 containerd[1608]: time="2025-12-12T17:28:07.238456325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:07.442563 containerd[1608]: time="2025-12-12T17:28:07.442520023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:07.445083 containerd[1608]: time="2025-12-12T17:28:07.444890661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:07.445083 containerd[1608]: time="2025-12-12T17:28:07.444958381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:07.445405 kubelet[2781]: E1212 17:28:07.445356 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:07.445469 kubelet[2781]: E1212 17:28:07.445426 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:07.446265 kubelet[2781]: E1212 17:28:07.446007 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shfcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88db5b6c-t62vd_calico-apiserver(1274eff9-19f2-4a62-a07e-e088770fa339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:07.446551 containerd[1608]: time="2025-12-12T17:28:07.446474259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:07.447320 kubelet[2781]: E1212 17:28:07.447288 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88db5b6c-t62vd" podUID="1274eff9-19f2-4a62-a07e-e088770fa339" Dec 12 17:28:07.709191 containerd[1608]: time="2025-12-12T17:28:07.709041784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:07.711411 containerd[1608]: time="2025-12-12T17:28:07.711345782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:07.711523 containerd[1608]: time="2025-12-12T17:28:07.711480982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:07.711665 kubelet[2781]: E1212 17:28:07.711620 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:07.711713 kubelet[2781]: E1212 17:28:07.711674 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:07.711845 kubelet[2781]: E1212 17:28:07.711802 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgzfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffhww_calico-system(5fefc895-2faa-4b6c-b800-5fdfceed3426): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:07.713172 kubelet[2781]: E1212 17:28:07.713103 2781 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffhww" podUID="5fefc895-2faa-4b6c-b800-5fdfceed3426" Dec 12 17:28:08.313393 systemd[1]: Started sshd@22-10.0.0.57:22-10.0.0.1:36912.service - OpenSSH per-connection server daemon (10.0.0.1:36912). Dec 12 17:28:08.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.57:22-10.0.0.1:36912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:08.317509 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:08.317608 kernel: audit: type=1130 audit(1765560488.312:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.57:22-10.0.0.1:36912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:08.381000 audit[5279]: USER_ACCT pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.383318 sshd[5279]: Accepted publickey for core from 10.0.0.1 port 36912 ssh2: RSA SHA256:y94zbphlCBI0q2g7nmHRayskfE9ySmXK/cGzHDOY3Lg Dec 12 17:28:08.385217 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:08.383000 audit[5279]: CRED_ACQ pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.389699 kernel: audit: type=1101 audit(1765560488.381:879): pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.389775 kernel: audit: type=1103 audit(1765560488.383:880): pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.392063 kernel: audit: type=1006 audit(1765560488.383:881): pid=5279 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 17:28:08.392404 kernel: audit: type=1300 audit(1765560488.383:881): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb2693b0 a2=3 a3=0 items=0 ppid=1 pid=5279 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:28:08.383000 audit[5279]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb2693b0 a2=3 a3=0 items=0 ppid=1 pid=5279 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:08.394808 systemd-logind[1592]: New session 23 of user core. Dec 12 17:28:08.383000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:08.396731 kernel: audit: type=1327 audit(1765560488.383:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:08.404337 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 17:28:08.408000 audit[5279]: USER_START pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.410000 audit[5282]: CRED_ACQ pid=5282 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.421297 kernel: audit: type=1105 audit(1765560488.408:882): pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.421377 kernel: audit: type=1103 audit(1765560488.410:883): pid=5282 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.522400 sshd[5282]: Connection closed by 10.0.0.1 port 36912 Dec 12 17:28:08.524314 sshd-session[5279]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:08.524000 audit[5279]: USER_END pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.528886 systemd-logind[1592]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:28:08.529281 systemd[1]: sshd@22-10.0.0.57:22-10.0.0.1:36912.service: Deactivated successfully. Dec 12 17:28:08.524000 audit[5279]: CRED_DISP pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.532323 systemd[1]: session-23.scope: Deactivated successfully. 
Dec 12 17:28:08.532923 kernel: audit: type=1106 audit(1765560488.524:884): pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.532974 kernel: audit: type=1104 audit(1765560488.524:885): pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:28:08.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.57:22-10.0.0.1:36912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:08.534185 systemd-logind[1592]: Removed session 23. Dec 12 17:28:09.032602 containerd[1608]: time="2025-12-12T17:28:09.031263041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:28:09.237355 containerd[1608]: time="2025-12-12T17:28:09.237310964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:09.238765 containerd[1608]: time="2025-12-12T17:28:09.238727004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:28:09.238870 containerd[1608]: time="2025-12-12T17:28:09.238814964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:09.239132 kubelet[2781]: E1212 17:28:09.238982 2781 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:09.239132 kubelet[2781]: E1212 17:28:09.239036 2781 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:09.239929 kubelet[2781]: E1212 17:28:09.239854 2781 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwpkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nnrnj_calico-system(15a11d37-fdc8-4b22-a7c5-9c4c4246dd24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:09.241064 kubelet[2781]: E1212 17:28:09.241025 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nnrnj" podUID="15a11d37-fdc8-4b22-a7c5-9c4c4246dd24" Dec 12 17:28:10.031611 kubelet[2781]: E1212 17:28:10.031551 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5649bfbdf7-hfvhj" podUID="6cfb895d-8011-4a5d-b5ce-fee54d2880b7"