Dec 16 12:23:55.336758 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:23:55.336785 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025
Dec 16 12:23:55.336793 kernel: KASLR enabled
Dec 16 12:23:55.336799 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:23:55.336805 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 16 12:23:55.336811 kernel: random: crng init done
Dec 16 12:23:55.336818 kernel: secureboot: Secure boot disabled
Dec 16 12:23:55.336824 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:23:55.336832 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 16 12:23:55.336838 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:23:55.336844 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:55.336850 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:55.336856 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:55.336862 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:55.336871 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:55.336878 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:55.336884 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:55.336891 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:55.336897 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:55.336904 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 16 12:23:55.337004 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:23:55.337011 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:23:55.337021 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 16 12:23:55.337028 kernel: Zone ranges:
Dec 16 12:23:55.337034 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:23:55.337040 kernel: DMA32 empty
Dec 16 12:23:55.337046 kernel: Normal empty
Dec 16 12:23:55.337052 kernel: Device empty
Dec 16 12:23:55.337059 kernel: Movable zone start for each node
Dec 16 12:23:55.337065 kernel: Early memory node ranges
Dec 16 12:23:55.337071 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 16 12:23:55.337078 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 16 12:23:55.337084 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 16 12:23:55.337091 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 16 12:23:55.337099 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 16 12:23:55.337105 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 16 12:23:55.337111 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 16 12:23:55.337117 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 16 12:23:55.337124 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 16 12:23:55.337130 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 16 12:23:55.337141 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 16 12:23:55.337148 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 16 12:23:55.337155 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 16 12:23:55.337162 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:23:55.337168 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 16 12:23:55.337175 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 16 12:23:55.337183 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:23:55.337190 kernel: psci: PSCIv1.1 detected in firmware.
Dec 16 12:23:55.337198 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:23:55.337206 kernel: psci: Trusted OS migration not required
Dec 16 12:23:55.337212 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:23:55.337219 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:23:55.337226 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:23:55.337233 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:23:55.337240 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 12:23:55.337247 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:23:55.337254 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:23:55.337261 kernel: CPU features: detected: Spectre-v4
Dec 16 12:23:55.337268 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:23:55.337277 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:23:55.337284 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:23:55.337291 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:23:55.337298 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:23:55.337306 kernel: alternatives: applying boot alternatives
Dec 16 12:23:55.337314 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849
Dec 16 12:23:55.337321 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:23:55.337328 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:23:55.337335 kernel: Fallback order for Node 0: 0
Dec 16 12:23:55.337341 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 16 12:23:55.337349 kernel: Policy zone: DMA
Dec 16 12:23:55.337356 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:23:55.337363 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 16 12:23:55.337370 kernel: software IO TLB: area num 4.
Dec 16 12:23:55.337377 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 16 12:23:55.337384 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 16 12:23:55.337391 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:23:55.337398 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:23:55.337406 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:23:55.337413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:23:55.337420 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:23:55.337428 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:23:55.337436 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:23:55.337443 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:23:55.337450 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:23:55.337457 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:23:55.337464 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:23:55.337470 kernel: GICv3: 256 SPIs implemented
Dec 16 12:23:55.337477 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:23:55.337484 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:23:55.337491 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:23:55.337498 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:23:55.337507 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:23:55.337514 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:23:55.337521 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:23:55.337529 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:23:55.337536 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 16 12:23:55.337544 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 16 12:23:55.337551 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:23:55.337558 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:23:55.337565 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:23:55.337572 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:23:55.337580 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:23:55.337588 kernel: arm-pv: using stolen time PV
Dec 16 12:23:55.337606 kernel: Console: colour dummy device 80x25
Dec 16 12:23:55.337614 kernel: ACPI: Core revision 20240827
Dec 16 12:23:55.337622 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:23:55.337629 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:23:55.337637 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:23:55.337644 kernel: landlock: Up and running.
Dec 16 12:23:55.337651 kernel: SELinux: Initializing.
Dec 16 12:23:55.337661 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:23:55.337669 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:23:55.337677 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:23:55.337684 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:23:55.337692 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:23:55.337699 kernel: Remapping and enabling EFI services.
Dec 16 12:23:55.337706 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:23:55.337716 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:23:55.337728 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:23:55.337737 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 16 12:23:55.337745 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:23:55.337752 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:23:55.337760 kernel: Detected PIPT I-cache on CPU2
Dec 16 12:23:55.337769 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 12:23:55.337779 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 16 12:23:55.337786 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:23:55.337794 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 12:23:55.337801 kernel: Detected PIPT I-cache on CPU3
Dec 16 12:23:55.337809 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 12:23:55.337817 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 16 12:23:55.337825 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:23:55.337834 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 12:23:55.337841 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:23:55.337849 kernel: SMP: Total of 4 processors activated.
Dec 16 12:23:55.337856 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:23:55.337864 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:23:55.337872 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:23:55.337880 kernel: CPU features: detected: Common not Private translations
Dec 16 12:23:55.337889 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:23:55.337897 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:23:55.337911 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:23:55.337920 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:23:55.337928 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:23:55.337936 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:23:55.337944 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:23:55.337952 kernel: alternatives: applying system-wide alternatives
Dec 16 12:23:55.337962 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 12:23:55.337970 kernel: Memory: 2450912K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 99040K reserved, 16384K cma-reserved)
Dec 16 12:23:55.337977 kernel: devtmpfs: initialized
Dec 16 12:23:55.337985 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:23:55.337993 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:23:55.338001 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:23:55.338008 kernel: 0 pages in range for non-PLT usage
Dec 16 12:23:55.338017 kernel: 515184 pages in range for PLT usage
Dec 16 12:23:55.338025 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:23:55.338033 kernel: SMBIOS 3.0.0 present.
Dec 16 12:23:55.338040 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 16 12:23:55.338048 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:23:55.338056 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:23:55.338123 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:23:55.338139 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:23:55.338147 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:23:55.338155 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:23:55.338163 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Dec 16 12:23:55.338170 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:23:55.338178 kernel: cpuidle: using governor menu
Dec 16 12:23:55.338185 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:23:55.338195 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:23:55.338203 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:23:55.338210 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:23:55.338218 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:23:55.338226 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:23:55.338233 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:23:55.338241 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:23:55.338250 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:23:55.338258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:23:55.338266 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:23:55.338274 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:23:55.338282 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:23:55.338289 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:23:55.338297 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:23:55.338305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:23:55.338314 kernel: ACPI: Interpreter enabled
Dec 16 12:23:55.338322 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:23:55.338330 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:23:55.338337 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:23:55.338346 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:23:55.338353 kernel: ACPI: CPU2 has been hot-added
Dec 16 12:23:55.338361 kernel: ACPI: CPU3 has been hot-added
Dec 16 12:23:55.338372 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:23:55.338379 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:23:55.338387 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:23:55.338597 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:23:55.338884 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:23:55.339023 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:23:55.339113 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:23:55.339196 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:23:55.339212 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:23:55.339221 kernel: PCI host bridge to bus 0000:00
Dec 16 12:23:55.339318 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:23:55.339406 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:23:55.339501 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:23:55.339586 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:23:55.339704 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:23:55.339807 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:23:55.339995 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 16 12:23:55.340094 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 16 12:23:55.340193 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:23:55.340273 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 16 12:23:55.340354 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 16 12:23:55.340435 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 16 12:23:55.340514 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 16 12:23:55.340717 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:23:55.340873 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 16 12:23:55.340887 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:23:55.340896 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:23:55.340924 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:23:55.340935 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:23:55.340943 kernel: iommu: Default domain type: Translated
Dec 16 12:23:55.340956 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:23:55.340964 kernel: efivars: Registered efivars operations
Dec 16 12:23:55.340972 kernel: vgaarb: loaded
Dec 16 12:23:55.340980 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:23:55.341066 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:23:55.341079 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:23:55.341087 kernel: pnp: PnP ACPI init
Dec 16 12:23:55.341411 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 16 12:23:55.341462 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:23:55.341472 kernel: NET: Registered PF_INET protocol family
Dec 16 12:23:55.341480 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:23:55.341488 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:23:55.341497 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:23:55.341505 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:23:55.341516 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:23:55.341524 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:23:55.341533 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:23:55.341540 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:23:55.341548 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:23:55.341557 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:23:55.341565 kernel: kvm [1]: HYP mode not available
Dec 16 12:23:55.341579 kernel: Initialise system trusted keyrings
Dec 16 12:23:55.341588 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:23:55.341602 kernel: Key type asymmetric registered
Dec 16 12:23:55.341611 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:23:55.341619 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:23:55.341628 kernel: io scheduler mq-deadline registered
Dec 16 12:23:55.341635 kernel: io scheduler kyber registered
Dec 16 12:23:55.341645 kernel: io scheduler bfq registered
Dec 16 12:23:55.341653 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:23:55.341661 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:23:55.341670 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:23:55.341784 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 16 12:23:55.341797 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:23:55.341805 kernel: thunder_xcv, ver 1.0
Dec 16 12:23:55.341815 kernel: thunder_bgx, ver 1.0
Dec 16 12:23:55.341823 kernel: nicpf, ver 1.0
Dec 16 12:23:55.341831 kernel: nicvf, ver 1.0
Dec 16 12:23:55.341948 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:23:55.342038 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:23:54 UTC (1765887834)
Dec 16 12:23:55.342048 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:23:55.342059 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:23:55.342067 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:23:55.342075 kernel: watchdog: NMI not fully supported
Dec 16 12:23:55.342082 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:23:55.342090 kernel: Segment Routing with IPv6
Dec 16 12:23:55.342098 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:23:55.342106 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:23:55.342113 kernel: Key type dns_resolver registered
Dec 16 12:23:55.342122 kernel: registered taskstats version 1
Dec 16 12:23:55.342129 kernel: Loading compiled-in X.509 certificates
Dec 16 12:23:55.342137 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9'
Dec 16 12:23:55.342145 kernel: Demotion targets for Node 0: null
Dec 16 12:23:55.342153 kernel: Key type .fscrypt registered
Dec 16 12:23:55.342160 kernel: Key type fscrypt-provisioning registered
Dec 16 12:23:55.342168 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:23:55.342177 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:23:55.342185 kernel: ima: No architecture policies found
Dec 16 12:23:55.342193 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:23:55.342200 kernel: clk: Disabling unused clocks
Dec 16 12:23:55.342208 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:23:55.342215 kernel: Freeing unused kernel memory: 12416K
Dec 16 12:23:55.342223 kernel: Run /init as init process
Dec 16 12:23:55.342232 kernel: with arguments:
Dec 16 12:23:55.342240 kernel: /init
Dec 16 12:23:55.342247 kernel: with environment:
Dec 16 12:23:55.342255 kernel: HOME=/
Dec 16 12:23:55.342262 kernel: TERM=linux
Dec 16 12:23:55.342363 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 16 12:23:55.342441 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Dec 16 12:23:55.342454 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:23:55.342462 kernel: GPT:16515071 != 27000831
Dec 16 12:23:55.342470 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:23:55.342477 kernel: GPT:16515071 != 27000831
Dec 16 12:23:55.342485 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:23:55.342492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:23:55.342502 kernel: SCSI subsystem initialized
Dec 16 12:23:55.342510 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:23:55.342518 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:23:55.342526 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:23:55.342533 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:23:55.342541 kernel: raid6: neonx8 gen() 15779 MB/s
Dec 16 12:23:55.342549 kernel: raid6: neonx4 gen() 15432 MB/s
Dec 16 12:23:55.342558 kernel: raid6: neonx2 gen() 8345 MB/s
Dec 16 12:23:55.342565 kernel: raid6: neonx1 gen() 10164 MB/s
Dec 16 12:23:55.342573 kernel: raid6: int64x8 gen() 5560 MB/s
Dec 16 12:23:55.342581 kernel: raid6: int64x4 gen() 7041 MB/s
Dec 16 12:23:55.342599 kernel: raid6: int64x2 gen() 5994 MB/s
Dec 16 12:23:55.342609 kernel: raid6: int64x1 gen() 4844 MB/s
Dec 16 12:23:55.342617 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s
Dec 16 12:23:55.342627 kernel: raid6: .... xor() 12005 MB/s, rmw enabled
Dec 16 12:23:55.342635 kernel: raid6: using neon recovery algorithm
Dec 16 12:23:55.342643 kernel: xor: measuring software checksum speed
Dec 16 12:23:55.342650 kernel: 8regs : 21415 MB/sec
Dec 16 12:23:55.342658 kernel: 32regs : 21653 MB/sec
Dec 16 12:23:55.342666 kernel: arm64_neon : 20093 MB/sec
Dec 16 12:23:55.342674 kernel: xor: using function: 32regs (21653 MB/sec)
Dec 16 12:23:55.342682 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:23:55.342691 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (205)
Dec 16 12:23:55.342699 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc
Dec 16 12:23:55.342707 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:23:55.342714 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:23:55.342722 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:23:55.342730 kernel: loop: module loaded
Dec 16 12:23:55.342737 kernel: loop0: detected capacity change from 0 to 91480
Dec 16 12:23:55.342747 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:23:55.342756 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:23:55.342766 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:23:55.342775 systemd[1]: Detected virtualization kvm.
Dec 16 12:23:55.342783 systemd[1]: Detected architecture arm64.
Dec 16 12:23:55.342793 systemd[1]: Running in initrd.
Dec 16 12:23:55.342802 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:23:55.342810 systemd[1]: Hostname set to .
Dec 16 12:23:55.342818 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 16 12:23:55.342827 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:23:55.342835 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:23:55.342843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:23:55.342853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:23:55.342862 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:23:55.342871 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:23:55.342880 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:23:55.342889 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:23:55.342899 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:23:55.342916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:23:55.342939 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:23:55.342948 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:23:55.342956 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:23:55.342964 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:23:55.342972 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:23:55.342983 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:23:55.342991 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:23:55.343000 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:23:55.343008 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:23:55.343025 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:23:55.343037 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:23:55.343046 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:23:55.343055 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:23:55.343064 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:23:55.343074 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:23:55.343082 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:23:55.343091 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:23:55.343102 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:23:55.343111 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:23:55.343120 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:23:55.343128 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:23:55.343137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:23:55.343147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:23:55.343157 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:23:55.343166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:23:55.343174 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:23:55.343208 systemd-journald[345]: Collecting audit messages is enabled.
Dec 16 12:23:55.343231 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:23:55.343240 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:23:55.343249 systemd-journald[345]: Journal started
Dec 16 12:23:55.343269 systemd-journald[345]: Runtime Journal (/run/log/journal/fa65b261b55e4c0e8e5171e90f91b1de) is 6M, max 48.5M, 42.4M free.
Dec 16 12:23:55.350995 kernel: Bridge firewalling registered
Dec 16 12:23:55.344721 systemd-modules-load[346]: Inserted module 'br_netfilter'
Dec 16 12:23:55.353791 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:23:55.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.356949 kernel: audit: type=1130 audit(1765887835.352:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.356974 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:23:55.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.360680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:23:55.365233 kernel: audit: type=1130 audit(1765887835.357:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.365264 kernel: audit: type=1130 audit(1765887835.360:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.365074 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:23:55.367009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:23:55.368655 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:23:55.381704 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:23:55.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.387125 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:23:55.390148 kernel: audit: type=1130 audit(1765887835.382:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.395773 systemd-tmpfiles[368]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:23:55.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.397699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:23:55.402002 kernel: audit: type=1130 audit(1765887835.398:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.401180 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:23:55.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.406498 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:23:55.410832 kernel: audit: type=1130 audit(1765887835.403:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.410862 kernel: audit: type=1130 audit(1765887835.407:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.408045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:23:55.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.414271 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:23:55.416604 kernel: audit: type=1130 audit(1765887835.411:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.416631 kernel: audit: type=1334 audit(1765887835.415:10): prog-id=6 op=LOAD
Dec 16 12:23:55.415000 audit: BPF prog-id=6 op=LOAD
Dec 16 12:23:55.417292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:23:55.442296 dracut-cmdline[387]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849
Dec 16 12:23:55.467466 systemd-resolved[388]: Positive Trust Anchors:
Dec 16 12:23:55.467489 systemd-resolved[388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:23:55.467492 systemd-resolved[388]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 12:23:55.467525 systemd-resolved[388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:23:55.493103 systemd-resolved[388]: Defaulting to hostname 'linux'.
Dec 16 12:23:55.494118 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:23:55.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.495197 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:23:55.535942 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:23:55.544936 kernel: iscsi: registered transport (tcp)
Dec 16 12:23:55.563935 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:23:55.564005 kernel: QLogic iSCSI HBA Driver
Dec 16 12:23:55.588392 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:23:55.605055 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:23:55.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.607271 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:23:55.656639 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:23:55.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.659344 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:23:55.661225 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:23:55.716020 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:23:55.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.717000 audit: BPF prog-id=7 op=LOAD
Dec 16 12:23:55.717000 audit: BPF prog-id=8 op=LOAD
Dec 16 12:23:55.719198 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:23:55.752724 systemd-udevd[628]: Using default interface naming scheme 'v257'.
Dec 16 12:23:55.760889 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:23:55.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.763323 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:23:55.794031 dracut-pre-trigger[690]: rd.md=0: removing MD RAID activation
Dec 16 12:23:55.799245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:23:55.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.800000 audit: BPF prog-id=9 op=LOAD
Dec 16 12:23:55.803938 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:23:55.829854 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:23:55.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.832443 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:23:55.850322 systemd-networkd[742]: lo: Link UP
Dec 16 12:23:55.850331 systemd-networkd[742]: lo: Gained carrier
Dec 16 12:23:55.850874 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:23:55.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.852099 systemd[1]: Reached target network.target - Network.
Dec 16 12:23:55.909482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:23:55.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:55.913415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:23:55.956397 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 12:23:55.974421 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 12:23:55.981397 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 12:23:55.990732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:23:55.999087 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:23:56.018225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:23:56.018363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:23:56.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:56.021260 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:23:56.024882 disk-uuid[802]: Primary Header is updated.
Dec 16 12:23:56.024882 disk-uuid[802]: Secondary Entries is updated.
Dec 16 12:23:56.024882 disk-uuid[802]: Secondary Header is updated.
Dec 16 12:23:56.024924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:23:56.036347 systemd-networkd[742]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:23:56.036361 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:23:56.039046 systemd-networkd[742]: eth0: Link UP
Dec 16 12:23:56.039335 systemd-networkd[742]: eth0: Gained carrier
Dec 16 12:23:56.039353 systemd-networkd[742]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:23:56.059005 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 12:23:56.072203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:23:56.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:56.096181 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:23:56.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:56.097992 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:23:56.099329 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:23:56.101334 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:23:56.104450 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:23:56.141008 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:23:56.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.063844 disk-uuid[804]: Warning: The kernel is still using the old partition table.
Dec 16 12:23:57.063844 disk-uuid[804]: The new table will be used at the next reboot or after you
Dec 16 12:23:57.063844 disk-uuid[804]: run partprobe(8) or kpartx(8)
Dec 16 12:23:57.063844 disk-uuid[804]: The operation has completed successfully.
Dec 16 12:23:57.070027 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:23:57.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.070144 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:23:57.072426 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:23:57.108935 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Dec 16 12:23:57.112630 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:23:57.112699 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:23:57.115933 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:23:57.115994 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:23:57.122085 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:23:57.122617 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:23:57.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.124777 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:23:57.238012 ignition[855]: Ignition 2.22.0
Dec 16 12:23:57.238030 ignition[855]: Stage: fetch-offline
Dec 16 12:23:57.238082 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:57.238093 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:57.238278 ignition[855]: parsed url from cmdline: ""
Dec 16 12:23:57.238282 ignition[855]: no config URL provided
Dec 16 12:23:57.238287 ignition[855]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:23:57.238295 ignition[855]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:23:57.238337 ignition[855]: op(1): [started] loading QEMU firmware config module
Dec 16 12:23:57.238343 ignition[855]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 16 12:23:57.245217 ignition[855]: op(1): [finished] loading QEMU firmware config module
Dec 16 12:23:57.291054 ignition[855]: parsing config with SHA512: 989ddb649541ddf3d87a950b2881b789dbd5ef1dcf7709822d3eb7f79d0cd426099621f78478eb30f40d32d2fc2e6621cae18af8c372771051193a8e97029495
Dec 16 12:23:57.295569 unknown[855]: fetched base config from "system"
Dec 16 12:23:57.295597 unknown[855]: fetched user config from "qemu"
Dec 16 12:23:57.296194 ignition[855]: fetch-offline: fetch-offline passed
Dec 16 12:23:57.296272 ignition[855]: Ignition finished successfully
Dec 16 12:23:57.298088 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:23:57.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.299740 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 12:23:57.300785 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:23:57.345149 ignition[866]: Ignition 2.22.0
Dec 16 12:23:57.345169 ignition[866]: Stage: kargs
Dec 16 12:23:57.345336 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:57.345345 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:57.346230 ignition[866]: kargs: kargs passed
Dec 16 12:23:57.346285 ignition[866]: Ignition finished successfully
Dec 16 12:23:57.348972 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:23:57.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.351488 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:23:57.388624 ignition[874]: Ignition 2.22.0
Dec 16 12:23:57.388640 ignition[874]: Stage: disks
Dec 16 12:23:57.388805 ignition[874]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:57.388814 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:57.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.391668 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:23:57.389666 ignition[874]: disks: disks passed
Dec 16 12:23:57.392954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:23:57.389728 ignition[874]: Ignition finished successfully
Dec 16 12:23:57.394966 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:23:57.396789 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:23:57.398376 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:23:57.400481 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:23:57.403891 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:23:57.446504 systemd-fsck[884]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Dec 16 12:23:57.454529 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:23:57.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.457572 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:23:57.540193 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none.
Dec 16 12:23:57.540988 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:23:57.542000 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:23:57.546273 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:23:57.548206 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:23:57.549269 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:23:57.549314 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:23:57.549344 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:23:57.561291 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:23:57.564108 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:23:57.569532 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892)
Dec 16 12:23:57.569571 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:23:57.569592 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:23:57.574029 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:23:57.574088 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:23:57.575301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:23:57.608432 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:23:57.613633 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:23:57.618207 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:23:57.622348 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:23:57.731030 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:23:57.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.733652 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:23:57.735948 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:23:57.762563 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:23:57.764561 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:23:57.782933 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:23:57.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.798708 ignition[1005]: INFO : Ignition 2.22.0
Dec 16 12:23:57.798708 ignition[1005]: INFO : Stage: mount
Dec 16 12:23:57.800355 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:57.800355 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:57.800355 ignition[1005]: INFO : mount: mount passed
Dec 16 12:23:57.800355 ignition[1005]: INFO : Ignition finished successfully
Dec 16 12:23:57.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:57.803157 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:23:57.805310 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:23:58.065079 systemd-networkd[742]: eth0: Gained IPv6LL
Dec 16 12:23:58.548496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:23:58.579220 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019)
Dec 16 12:23:58.579292 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a
Dec 16 12:23:58.579306 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:23:58.583001 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:23:58.583074 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:23:58.584644 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:23:58.630688 ignition[1036]: INFO : Ignition 2.22.0
Dec 16 12:23:58.630688 ignition[1036]: INFO : Stage: files
Dec 16 12:23:58.632551 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:58.632551 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:58.632551 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:23:58.635956 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:23:58.635956 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:23:58.640097 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:23:58.642270 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:23:58.643613 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:23:58.642939 unknown[1036]: wrote ssh authorized keys file for user: core
Dec 16 12:23:58.647437 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:23:58.649515 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 16 12:23:58.705040 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:23:58.816226 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:23:58.816226 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:23:58.820613 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:23:58.820613 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:23:58.820613 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:23:58.820613 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:23:58.820613 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:23:58.820613 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:23:58.820613 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:23:58.890632 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:23:58.893220 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:23:58.893220 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:23:58.911632 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:23:58.911632 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:23:58.917143 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Dec 16 12:23:59.279930 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 12:23:59.517963 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:23:59.517963 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 12:23:59.521670 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:23:59.524160 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:23:59.524160 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 12:23:59.524160 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 16 12:23:59.528809 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:23:59.528809 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:23:59.528809 ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 16 12:23:59.528809 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:23:59.552213 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:23:59.557470 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:23:59.560191 ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:23:59.560191 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:23:59.560191 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:23:59.560191 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:23:59.560191 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:23:59.560191 ignition[1036]: INFO : files: files passed
Dec 16 12:23:59.560191 ignition[1036]: INFO : Ignition finished successfully
Dec 16 12:23:59.575114 kernel: kauditd_printk_skb: 26 callbacks suppressed
Dec 16 12:23:59.575154 kernel: audit: type=1130 audit(1765887839.561:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.560670 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:23:59.564647 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:23:59.568973 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:23:59.587461 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:23:59.587629 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:23:59.595645 kernel: audit: type=1130 audit(1765887839.588:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.595677 kernel: audit: type=1131 audit(1765887839.588:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.598159 initrd-setup-root-after-ignition[1067]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 12:23:59.601555 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:23:59.601555 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:23:59.605275 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:23:59.607801 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:23:59.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.610213 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:23:59.614717 kernel: audit: type=1130 audit(1765887839.608:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.614814 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:23:59.694467 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:23:59.694608 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:23:59.702203 kernel: audit: type=1130 audit(1765887839.696:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.702232 kernel: audit: type=1131 audit(1765887839.696:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:23:59.696898 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:23:59.703159 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:23:59.705154 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:23:59.706176 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:23:59.749333 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:23:59.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 16 12:23:59.752103 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:23:59.757097 kernel: audit: type=1130 audit(1765887839.750:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.777276 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:23:59.777524 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:23:59.779851 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:23:59.781963 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:23:59.784000 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:23:59.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.784193 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:23:59.790737 kernel: audit: type=1131 audit(1765887839.785:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.789659 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:23:59.791827 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:23:59.793561 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:23:59.795138 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:23:59.797072 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:23:59.799002 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Dec 16 12:23:59.800963 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:23:59.803019 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:23:59.804973 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:23:59.807063 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:23:59.808927 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:23:59.810463 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:23:59.814966 kernel: audit: type=1131 audit(1765887839.811:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.810620 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:23:59.815116 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:23:59.817019 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:23:59.818923 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:23:59.823393 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:23:59.828797 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:23:59.830968 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:23:59.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.835539 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Dec 16 12:23:59.836569 kernel: audit: type=1131 audit(1765887839.831:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.836441 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:23:59.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.837949 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:23:59.839617 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:23:59.844016 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:23:59.845532 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:23:59.847677 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:23:59.849108 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:23:59.849247 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:23:59.850776 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:23:59.850903 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:23:59.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.852780 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:23:59.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.852891 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. 
Dec 16 12:23:59.854992 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:23:59.855167 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:23:59.856867 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:23:59.857063 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:23:59.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.859706 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:23:59.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.862733 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:23:59.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.864774 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:23:59.865016 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:23:59.867098 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:23:59.867272 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:23:59.868701 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:23:59.868859 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:23:59.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:23:59.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.877161 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:23:59.878954 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:23:59.888082 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:23:59.897711 ignition[1093]: INFO : Ignition 2.22.0 Dec 16 12:23:59.897711 ignition[1093]: INFO : Stage: umount Dec 16 12:23:59.899508 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:23:59.899508 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:23:59.899508 ignition[1093]: INFO : umount: umount passed Dec 16 12:23:59.899508 ignition[1093]: INFO : Ignition finished successfully Dec 16 12:23:59.901813 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:23:59.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.902953 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:23:59.904624 systemd[1]: Stopped target network.target - Network. Dec 16 12:23:59.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.906129 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:23:59.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:23:59.906209 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:23:59.907937 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:23:59.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.908009 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:23:59.909694 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:23:59.909752 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:23:59.911466 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:23:59.911522 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:23:59.913272 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:23:59.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.915095 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:23:59.922372 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:23:59.926000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:23:59.922503 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:23:59.929856 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:23:59.930030 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 16 12:23:59.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.934381 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:23:59.935528 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:23:59.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.938212 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:23:59.938000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:23:59.939297 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:23:59.939354 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:23:59.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.941481 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:23:59.941555 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:23:59.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.944254 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:23:59.945282 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:23:59.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:23:59.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.945363 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:23:59.947421 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:23:59.947489 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:23:59.949549 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:23:59.949613 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:23:59.951453 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:23:59.965687 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:23:59.965923 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:23:59.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.968100 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:23:59.968151 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:23:59.970604 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:23:59.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.970682 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:23:59.972622 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 16 12:23:59.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.972689 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:23:59.975500 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:23:59.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.975565 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:23:59.978436 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:23:59.978507 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:23:59.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.982616 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:23:59.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.983722 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:23:59.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.983799 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 16 12:23:59.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.985638 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:23:59.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:23:59.985709 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:23:59.987787 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:23:59.987836 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:23:59.989895 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:23:59.989968 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:23:59.991915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:23:59.991974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:24:00.007762 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:24:00.007904 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:24:00.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:00.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:24:00.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:00.010591 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:24:00.010885 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:24:00.014588 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:24:00.016799 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:24:00.041819 systemd[1]: Switching root. Dec 16 12:24:00.083879 systemd-journald[345]: Journal stopped Dec 16 12:24:01.097458 systemd-journald[345]: Received SIGTERM from PID 1 (systemd). Dec 16 12:24:01.097524 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:24:01.097545 kernel: SELinux: policy capability open_perms=1 Dec 16 12:24:01.097560 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:24:01.097587 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:24:01.097602 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:24:01.097616 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:24:01.097626 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:24:01.097636 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:24:01.097646 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:24:01.097660 systemd[1]: Successfully loaded SELinux policy in 65.295ms. Dec 16 12:24:01.097674 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.646ms. 
Dec 16 12:24:01.097686 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:24:01.097698 systemd[1]: Detected virtualization kvm. Dec 16 12:24:01.097709 systemd[1]: Detected architecture arm64. Dec 16 12:24:01.097721 systemd[1]: Detected first boot. Dec 16 12:24:01.097732 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:24:01.097745 zram_generator::config[1139]: No configuration found. Dec 16 12:24:01.097761 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:24:01.097776 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:24:01.097788 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:24:01.097799 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:24:01.097810 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:24:01.097823 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:24:01.097843 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:24:01.097855 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:24:01.097867 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:24:01.097878 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:24:01.097889 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:24:01.097902 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:24:01.097937 systemd[1]: Created slice user.slice - User and Session Slice. 
Dec 16 12:24:01.097948 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:24:01.097960 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:24:01.097972 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:24:01.097982 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:24:01.097994 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:24:01.098007 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:24:01.098020 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:24:01.098031 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:24:01.098042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:24:01.098054 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:24:01.098065 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:24:01.098078 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:24:01.098089 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:24:01.098101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:24:01.098112 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:24:01.098123 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:24:01.098134 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:24:01.098145 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:24:01.098157 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Dec 16 12:24:01.098168 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:24:01.098179 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:24:01.098190 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:24:01.098201 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:24:01.098212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:24:01.098223 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:24:01.098233 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:24:01.098245 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:24:01.098256 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:24:01.098268 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:24:01.098279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:24:01.098290 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:24:01.098300 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:24:01.098311 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:24:01.098323 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:24:01.098334 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:24:01.098346 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:24:01.098358 systemd[1]: Reached target machines.target - Containers. Dec 16 12:24:01.098369 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 16 12:24:01.098382 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:24:01.098396 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:24:01.098407 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:24:01.098417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:24:01.098428 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:24:01.098440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:24:01.098451 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:24:01.098466 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:24:01.098479 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:24:01.098491 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:24:01.098502 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:24:01.098512 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:24:01.098524 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:24:01.098536 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:24:01.098548 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:24:01.098560 kernel: fuse: init (API version 7.41) Dec 16 12:24:01.098576 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Dec 16 12:24:01.098588 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:24:01.098599 kernel: ACPI: bus type drm_connector registered
Dec 16 12:24:01.098611 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 12:24:01.098622 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 12:24:01.098632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:24:01.098645 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 12:24:01.098656 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 12:24:01.098666 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 12:24:01.098677 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 12:24:01.098689 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 12:24:01.098722 systemd-journald[1217]: Collecting audit messages is enabled.
Dec 16 12:24:01.098749 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 12:24:01.098761 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 12:24:01.098772 systemd-journald[1217]: Journal started
Dec 16 12:24:01.098795 systemd-journald[1217]: Runtime Journal (/run/log/journal/fa65b261b55e4c0e8e5171e90f91b1de) is 6M, max 48.5M, 42.4M free.
Dec 16 12:24:00.916000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 16 12:24:01.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.039000 audit: BPF prog-id=14 op=UNLOAD
Dec 16 12:24:01.039000 audit: BPF prog-id=13 op=UNLOAD
Dec 16 12:24:01.040000 audit: BPF prog-id=15 op=LOAD
Dec 16 12:24:01.042000 audit: BPF prog-id=16 op=LOAD
Dec 16 12:24:01.042000 audit: BPF prog-id=17 op=LOAD
Dec 16 12:24:01.095000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 16 12:24:01.095000 audit[1217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdaf10100 a2=4000 a3=0 items=0 ppid=1 pid=1217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:24:01.095000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 16 12:24:00.794390 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 12:24:00.816183 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 12:24:00.816703 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 12:24:01.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.102930 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:24:01.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.105990 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:24:01.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.107618 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:24:01.107901 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:24:01.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.109533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:24:01.109794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:24:01.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.111461 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:24:01.111709 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:24:01.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.113387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:24:01.113581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:24:01.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.115405 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:24:01.115687 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:24:01.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.117684 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:24:01.117937 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:24:01.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.119600 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:24:01.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.121708 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:24:01.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.124604 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 12:24:01.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.126515 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 12:24:01.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.141288 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:24:01.143418 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Dec 16 12:24:01.146478 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:24:01.148991 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:24:01.150230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 12:24:01.150286 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:24:01.152589 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 12:24:01.154163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:24:01.154295 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 12:24:01.168964 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 12:24:01.171404 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 12:24:01.172532 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:24:01.173833 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 12:24:01.175010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:24:01.178064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:24:01.180481 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 12:24:01.186234 systemd-journald[1217]: Time spent on flushing to /var/log/journal/fa65b261b55e4c0e8e5171e90f91b1de is 20.454ms for 1008 entries.
Dec 16 12:24:01.186234 systemd-journald[1217]: System Journal (/var/log/journal/fa65b261b55e4c0e8e5171e90f91b1de) is 8M, max 163.5M, 155.5M free.
Dec 16 12:24:01.239195 systemd-journald[1217]: Received client request to flush runtime journal.
Dec 16 12:24:01.239265 kernel: loop1: detected capacity change from 0 to 211168
Dec 16 12:24:01.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.188013 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:24:01.191487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:24:01.194727 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:24:01.197374 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:24:01.203391 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 12:24:01.209827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 12:24:01.213790 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 12:24:01.230034 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Dec 16 12:24:01.230045 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Dec 16 12:24:01.233834 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:24:01.237308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:24:01.243268 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 12:24:01.244804 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 12:24:01.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.272975 kernel: loop2: detected capacity change from 0 to 100192
Dec 16 12:24:01.319949 kernel: loop3: detected capacity change from 0 to 109872
Dec 16 12:24:01.320362 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 12:24:01.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.322000 audit: BPF prog-id=18 op=LOAD
Dec 16 12:24:01.322000 audit: BPF prog-id=19 op=LOAD
Dec 16 12:24:01.322000 audit: BPF prog-id=20 op=LOAD
Dec 16 12:24:01.324411 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Dec 16 12:24:01.326000 audit: BPF prog-id=21 op=LOAD
Dec 16 12:24:01.331099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:24:01.333423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:24:01.337000 audit: BPF prog-id=22 op=LOAD
Dec 16 12:24:01.346000 audit: BPF prog-id=23 op=LOAD
Dec 16 12:24:01.346000 audit: BPF prog-id=24 op=LOAD
Dec 16 12:24:01.348073 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Dec 16 12:24:01.349000 audit: BPF prog-id=25 op=LOAD
Dec 16 12:24:01.349000 audit: BPF prog-id=26 op=LOAD
Dec 16 12:24:01.349000 audit: BPF prog-id=27 op=LOAD
Dec 16 12:24:01.352110 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:24:01.356039 kernel: loop4: detected capacity change from 0 to 211168
Dec 16 12:24:01.358748 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Dec 16 12:24:01.359176 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Dec 16 12:24:01.367659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:24:01.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.387255 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 12:24:01.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.395024 kernel: loop5: detected capacity change from 0 to 100192
Dec 16 12:24:01.394656 systemd-nsresourced[1283]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Dec 16 12:24:01.397064 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 16 12:24:01.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.404998 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:24:01.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.409942 kernel: loop6: detected capacity change from 0 to 109872
Dec 16 12:24:01.423517 (sd-merge)[1286]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Dec 16 12:24:01.426694 (sd-merge)[1286]: Merged extensions into '/usr'.
Dec 16 12:24:01.430538 systemd[1]: Reload requested from client PID 1258 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:24:01.430559 systemd[1]: Reloading...
Dec 16 12:24:01.465257 systemd-oomd[1279]: No swap; memory pressure usage will be degraded
Dec 16 12:24:01.473369 systemd-resolved[1281]: Positive Trust Anchors:
Dec 16 12:24:01.473392 systemd-resolved[1281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:24:01.473395 systemd-resolved[1281]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 12:24:01.473426 systemd-resolved[1281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:24:01.484471 systemd-resolved[1281]: Defaulting to hostname 'linux'.
Dec 16 12:24:01.495945 zram_generator::config[1332]: No configuration found.
Dec 16 12:24:01.643348 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:24:01.643583 systemd[1]: Reloading finished in 212 ms.
Dec 16 12:24:01.674388 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 16 12:24:01.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.675684 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:24:01.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.677019 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:24:01.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.680663 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:24:01.693271 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:24:01.695416 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:24:01.696000 audit: BPF prog-id=28 op=LOAD
Dec 16 12:24:01.696000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 12:24:01.696000 audit: BPF prog-id=29 op=LOAD
Dec 16 12:24:01.696000 audit: BPF prog-id=30 op=LOAD
Dec 16 12:24:01.696000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 12:24:01.696000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 12:24:01.697000 audit: BPF prog-id=31 op=LOAD
Dec 16 12:24:01.697000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 12:24:01.697000 audit: BPF prog-id=32 op=LOAD
Dec 16 12:24:01.697000 audit: BPF prog-id=33 op=LOAD
Dec 16 12:24:01.697000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 12:24:01.697000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 12:24:01.698000 audit: BPF prog-id=34 op=LOAD
Dec 16 12:24:01.698000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 12:24:01.699000 audit: BPF prog-id=35 op=LOAD
Dec 16 12:24:01.699000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 12:24:01.699000 audit: BPF prog-id=36 op=LOAD
Dec 16 12:24:01.699000 audit: BPF prog-id=37 op=LOAD
Dec 16 12:24:01.699000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 12:24:01.699000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 12:24:01.699000 audit: BPF prog-id=38 op=LOAD
Dec 16 12:24:01.699000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 12:24:01.700000 audit: BPF prog-id=39 op=LOAD
Dec 16 12:24:01.700000 audit: BPF prog-id=40 op=LOAD
Dec 16 12:24:01.700000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 12:24:01.700000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 12:24:01.704050 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:24:01.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.707000 audit: BPF prog-id=8 op=UNLOAD
Dec 16 12:24:01.707000 audit: BPF prog-id=7 op=UNLOAD
Dec 16 12:24:01.707000 audit: BPF prog-id=41 op=LOAD
Dec 16 12:24:01.707000 audit: BPF prog-id=42 op=LOAD
Dec 16 12:24:01.709893 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:24:01.711276 systemd[1]: Reload requested from client PID 1365 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:24:01.711291 systemd[1]: Reloading...
Dec 16 12:24:01.714379 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:24:01.714413 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:24:01.714683 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:24:01.715686 systemd-tmpfiles[1366]: ACLs are not supported, ignoring.
Dec 16 12:24:01.715742 systemd-tmpfiles[1366]: ACLs are not supported, ignoring.
Dec 16 12:24:01.720045 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:24:01.720058 systemd-tmpfiles[1366]: Skipping /boot
Dec 16 12:24:01.735641 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:24:01.735655 systemd-tmpfiles[1366]: Skipping /boot
Dec 16 12:24:01.757456 systemd-udevd[1369]: Using default interface naming scheme 'v257'.
Dec 16 12:24:01.781640 zram_generator::config[1396]: No configuration found.
Dec 16 12:24:01.970480 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 16 12:24:01.970765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:24:01.972328 systemd[1]: Reloading finished in 260 ms.
Dec 16 12:24:01.994974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:24:01.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:01.996000 audit: BPF prog-id=43 op=LOAD
Dec 16 12:24:01.996000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=44 op=LOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=45 op=LOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=46 op=LOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=47 op=LOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=48 op=LOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 12:24:01.997000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 12:24:01.998000 audit: BPF prog-id=49 op=LOAD
Dec 16 12:24:01.998000 audit: BPF prog-id=50 op=LOAD
Dec 16 12:24:01.998000 audit: BPF prog-id=41 op=UNLOAD
Dec 16 12:24:01.998000 audit: BPF prog-id=42 op=UNLOAD
Dec 16 12:24:01.998000 audit: BPF prog-id=51 op=LOAD
Dec 16 12:24:01.998000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 12:24:01.999000 audit: BPF prog-id=52 op=LOAD
Dec 16 12:24:01.999000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 12:24:01.999000 audit: BPF prog-id=53 op=LOAD
Dec 16 12:24:01.999000 audit: BPF prog-id=54 op=LOAD
Dec 16 12:24:01.999000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 12:24:01.999000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 12:24:02.000000 audit: BPF prog-id=55 op=LOAD
Dec 16 12:24:02.000000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 12:24:02.000000 audit: BPF prog-id=56 op=LOAD
Dec 16 12:24:02.000000 audit: BPF prog-id=57 op=LOAD
Dec 16 12:24:02.000000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 12:24:02.000000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 12:24:02.029029 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:24:02.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.049974 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:24:02.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.065921 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:24:02.068215 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:24:02.069455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:24:02.070691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:24:02.077060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:24:02.079796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:24:02.083408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:24:02.084686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:24:02.084806 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 12:24:02.086203 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:24:02.088520 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:24:02.089843 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:24:02.091060 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:24:02.093000 audit: BPF prog-id=58 op=LOAD
Dec 16 12:24:02.095227 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:24:02.100000 audit: BPF prog-id=59 op=LOAD
Dec 16 12:24:02.102259 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 12:24:02.105875 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:24:02.109431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:24:02.111662 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:24:02.112969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:24:02.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.115531 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:24:02.115754 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:24:02.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.121388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:24:02.120000 audit[1498]: SYSTEM_BOOT pid=1498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.121770 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:24:02.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.124177 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:24:02.124376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:24:02.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.128863 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:24:02.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:24:02.138980 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:24:02.141705 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:24:02.141902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:24:02.146901 augenrules[1515]: No rules Dec 16 12:24:02.145000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:24:02.145000 audit[1515]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe9caba60 a2=420 a3=0 items=0 ppid=1474 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:02.145000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:24:02.148803 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:24:02.151754 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:24:02.152125 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:24:02.160639 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:24:02.162378 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:24:02.191409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:24:02.203183 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:24:02.203778 systemd-networkd[1489]: lo: Link UP Dec 16 12:24:02.204089 systemd-networkd[1489]: lo: Gained carrier Dec 16 12:24:02.205040 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:24:02.205596 systemd-networkd[1489]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:24:02.205607 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 12:24:02.206341 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:24:02.206848 systemd-networkd[1489]: eth0: Link UP Dec 16 12:24:02.207152 systemd-networkd[1489]: eth0: Gained carrier Dec 16 12:24:02.207231 systemd-networkd[1489]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:24:02.209259 systemd[1]: Reached target network.target - Network. Dec 16 12:24:02.211927 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:24:02.214470 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:24:02.228429 systemd-networkd[1489]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:24:02.230576 systemd-timesyncd[1494]: Network configuration changed, trying to establish connection. Dec 16 12:24:02.233234 systemd-timesyncd[1494]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:24:02.233303 systemd-timesyncd[1494]: Initial clock synchronization to Tue 2025-12-16 12:24:02.521928 UTC. Dec 16 12:24:02.244987 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:24:02.449059 ldconfig[1481]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:24:02.455622 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:24:02.459287 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:24:02.481035 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:24:02.482312 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:24:02.483362 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Dec 16 12:24:02.484443 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:24:02.485796 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:24:02.486873 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:24:02.487897 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:24:02.489136 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:24:02.490099 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:24:02.491111 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:24:02.491150 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:24:02.491870 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:24:02.493710 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:24:02.496177 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:24:02.499090 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:24:02.500524 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:24:02.501652 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:24:02.505008 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:24:02.506342 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:24:02.508162 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:24:02.509346 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:24:02.510228 systemd[1]: Reached target basic.target - Basic System. 
Dec 16 12:24:02.511072 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:24:02.511112 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:24:02.512302 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:24:02.514426 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:24:02.516418 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:24:02.518770 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:24:02.521031 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:24:02.522064 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:24:02.523286 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:24:02.527007 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:24:02.527697 jq[1546]: false Dec 16 12:24:02.531094 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:24:02.533674 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:24:02.536379 extend-filesystems[1547]: Found /dev/vda6 Dec 16 12:24:02.538575 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:24:02.541037 extend-filesystems[1547]: Found /dev/vda9 Dec 16 12:24:02.540967 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:24:02.541499 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 16 12:24:02.542368 extend-filesystems[1547]: Checking size of /dev/vda9 Dec 16 12:24:02.542272 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:24:02.544198 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:24:02.548953 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:24:02.550372 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:24:02.552107 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:24:02.555884 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:24:02.558432 jq[1563]: true Dec 16 12:24:02.556813 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:24:02.561111 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:24:02.561379 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:24:02.567148 extend-filesystems[1547]: Resized partition /dev/vda9 Dec 16 12:24:02.570026 extend-filesystems[1582]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:24:02.580090 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 16 12:24:02.585414 tar[1571]: linux-arm64/LICENSE Dec 16 12:24:02.640531 jq[1580]: true Dec 16 12:24:02.608366 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:24:02.640732 update_engine[1560]: I20251216 12:24:02.588527 1560 main.cc:92] Flatcar Update Engine starting Dec 16 12:24:02.640732 update_engine[1560]: I20251216 12:24:02.613602 1560 update_check_scheduler.cc:74] Next update check in 4m41s Dec 16 12:24:02.607214 dbus-daemon[1544]: [system] SELinux support is enabled Dec 16 12:24:02.621645 systemd[1]: Started update-engine.service - Update Engine. 
Dec 16 12:24:02.624431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:24:02.624494 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:24:02.626498 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:24:02.626518 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:24:02.630983 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:24:02.642581 tar[1571]: linux-arm64/helm Dec 16 12:24:02.648494 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 16 12:24:02.687041 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:24:02.689161 systemd-logind[1558]: New seat seat0. Dec 16 12:24:02.690260 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:24:02.690260 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:24:02.690260 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 16 12:24:02.697624 extend-filesystems[1547]: Resized filesystem in /dev/vda9 Dec 16 12:24:02.691614 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:24:02.693971 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:24:02.697010 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:24:02.700606 bash[1609]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:24:02.702044 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Dec 16 12:24:02.704765 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:24:02.707989 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:24:02.737235 containerd[1581]: time="2025-12-16T12:24:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:24:02.737901 containerd[1581]: time="2025-12-16T12:24:02.737845440Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:24:02.748585 containerd[1581]: time="2025-12-16T12:24:02.748520400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.52µs" Dec 16 12:24:02.748585 containerd[1581]: time="2025-12-16T12:24:02.748571880Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:24:02.748728 containerd[1581]: time="2025-12-16T12:24:02.748628240Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:24:02.748728 containerd[1581]: time="2025-12-16T12:24:02.748640640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:24:02.748980 containerd[1581]: time="2025-12-16T12:24:02.748792880Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:24:02.748980 containerd[1581]: time="2025-12-16T12:24:02.748812320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:24:02.748980 containerd[1581]: time="2025-12-16T12:24:02.748861360Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Dec 16 12:24:02.748980 containerd[1581]: time="2025-12-16T12:24:02.748873320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749214 containerd[1581]: time="2025-12-16T12:24:02.749182800Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749214 containerd[1581]: time="2025-12-16T12:24:02.749206880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749263 containerd[1581]: time="2025-12-16T12:24:02.749220160Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749263 containerd[1581]: time="2025-12-16T12:24:02.749229280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749535 containerd[1581]: time="2025-12-16T12:24:02.749371400Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749535 containerd[1581]: time="2025-12-16T12:24:02.749396200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749535 containerd[1581]: time="2025-12-16T12:24:02.749469640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749664 containerd[1581]: time="2025-12-16T12:24:02.749642000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749694 
containerd[1581]: time="2025-12-16T12:24:02.749674720Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:24:02.749694 containerd[1581]: time="2025-12-16T12:24:02.749684720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:24:02.749737 containerd[1581]: time="2025-12-16T12:24:02.749715120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:24:02.750046 containerd[1581]: time="2025-12-16T12:24:02.749967000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:24:02.750099 containerd[1581]: time="2025-12-16T12:24:02.750050760Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:24:02.817104 containerd[1581]: time="2025-12-16T12:24:02.817036480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:24:02.817305 containerd[1581]: time="2025-12-16T12:24:02.817168600Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:24:02.817305 containerd[1581]: time="2025-12-16T12:24:02.817276640Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:24:02.817305 containerd[1581]: time="2025-12-16T12:24:02.817297440Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:24:02.817358 containerd[1581]: time="2025-12-16T12:24:02.817312600Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:24:02.817358 containerd[1581]: 
time="2025-12-16T12:24:02.817325080Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:24:02.817358 containerd[1581]: time="2025-12-16T12:24:02.817337120Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:24:02.817358 containerd[1581]: time="2025-12-16T12:24:02.817346920Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:24:02.817427 containerd[1581]: time="2025-12-16T12:24:02.817367360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:24:02.817427 containerd[1581]: time="2025-12-16T12:24:02.817383280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:24:02.817427 containerd[1581]: time="2025-12-16T12:24:02.817396200Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:24:02.817427 containerd[1581]: time="2025-12-16T12:24:02.817407720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:24:02.817427 containerd[1581]: time="2025-12-16T12:24:02.817417640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:24:02.817508 containerd[1581]: time="2025-12-16T12:24:02.817432360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817593400Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817620760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:24:02.819923 
containerd[1581]: time="2025-12-16T12:24:02.817637320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817648640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817659120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817668560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817682160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817698560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817711120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817722080Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817733000Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817835600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817881640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:24:02.819923 containerd[1581]: 
time="2025-12-16T12:24:02.817895920Z" level=info msg="Start snapshots syncer" Dec 16 12:24:02.819923 containerd[1581]: time="2025-12-16T12:24:02.817997120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:24:02.820242 containerd[1581]: time="2025-12-16T12:24:02.818387560Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var
/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:24:02.820242 containerd[1581]: time="2025-12-16T12:24:02.818459720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.818580040Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.818822360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.818850480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.818861720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.818872000Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.818884600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.818896000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.818974320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.818993400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.819006200Z" level=info 
msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.819054600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.819070440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:24:02.820341 containerd[1581]: time="2025-12-16T12:24:02.819080080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.819089640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.819097880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.819110240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.819124280Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.819185200Z" level=info msg="runtime interface created" Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.819194680Z" level=info msg="created NRI interface" Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.819203840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.819217840Z" level=info msg="Connect 
containerd service" Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.819239720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:24:02.820606 containerd[1581]: time="2025-12-16T12:24:02.820278080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:24:02.898837 containerd[1581]: time="2025-12-16T12:24:02.898750720Z" level=info msg="Start subscribing containerd event" Dec 16 12:24:02.898837 containerd[1581]: time="2025-12-16T12:24:02.898843840Z" level=info msg="Start recovering state" Dec 16 12:24:02.898989 containerd[1581]: time="2025-12-16T12:24:02.898972760Z" level=info msg="Start event monitor" Dec 16 12:24:02.899037 containerd[1581]: time="2025-12-16T12:24:02.898992600Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:24:02.899037 containerd[1581]: time="2025-12-16T12:24:02.899008080Z" level=info msg="Start streaming server" Dec 16 12:24:02.899074 containerd[1581]: time="2025-12-16T12:24:02.899050280Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:24:02.899074 containerd[1581]: time="2025-12-16T12:24:02.899059720Z" level=info msg="runtime interface starting up..." Dec 16 12:24:02.899074 containerd[1581]: time="2025-12-16T12:24:02.899066200Z" level=info msg="starting plugins..." Dec 16 12:24:02.899122 containerd[1581]: time="2025-12-16T12:24:02.899084360Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:24:02.899228 containerd[1581]: time="2025-12-16T12:24:02.899202160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:24:02.899282 containerd[1581]: time="2025-12-16T12:24:02.899269920Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 12:24:02.899338 containerd[1581]: time="2025-12-16T12:24:02.899327440Z" level=info msg="containerd successfully booted in 0.162636s" Dec 16 12:24:02.899554 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:24:02.969498 tar[1571]: linux-arm64/README.md Dec 16 12:24:02.989850 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:24:03.220408 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:24:03.243271 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:24:03.246380 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:24:03.265115 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:24:03.265423 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:24:03.268333 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:24:03.302033 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:24:03.305148 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:24:03.307520 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:24:03.309003 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:24:03.441661 systemd-networkd[1489]: eth0: Gained IPv6LL Dec 16 12:24:03.444590 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:24:03.446515 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:24:03.449206 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:24:03.451775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:24:03.472485 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:24:03.490823 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Dec 16 12:24:03.491443 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:24:03.494213 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:24:03.496217 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:24:04.098682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:24:04.100344 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:24:04.103192 systemd[1]: Startup finished in 1.536s (kernel) + 5.210s (initrd) + 3.866s (userspace) = 10.613s. Dec 16 12:24:04.103696 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:24:04.504498 kubelet[1682]: E1216 12:24:04.504421 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:24:04.507523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:24:04.507658 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:24:04.510081 systemd[1]: kubelet.service: Consumed 773ms CPU time, 258.2M memory peak. Dec 16 12:24:06.879424 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:24:06.880908 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:39798.service - OpenSSH per-connection server daemon (10.0.0.1:39798). 
Dec 16 12:24:06.997339 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 39798 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:24:06.999568 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:24:07.008615 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:24:07.009749 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:24:07.016116 systemd-logind[1558]: New session 1 of user core. Dec 16 12:24:07.059407 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:24:07.063977 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:24:07.095905 (systemd)[1700]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:24:07.102239 systemd-logind[1558]: New session c1 of user core. Dec 16 12:24:07.250433 systemd[1700]: Queued start job for default target default.target. Dec 16 12:24:07.268279 systemd[1700]: Created slice app.slice - User Application Slice. Dec 16 12:24:07.268322 systemd[1700]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:24:07.268336 systemd[1700]: Reached target paths.target - Paths. Dec 16 12:24:07.268394 systemd[1700]: Reached target timers.target - Timers. Dec 16 12:24:07.269858 systemd[1700]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:24:07.270739 systemd[1700]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:24:07.283311 systemd[1700]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:24:07.285340 systemd[1700]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:24:07.285530 systemd[1700]: Reached target sockets.target - Sockets. Dec 16 12:24:07.285644 systemd[1700]: Reached target basic.target - Basic System. 
Dec 16 12:24:07.285694 systemd[1700]: Reached target default.target - Main User Target. Dec 16 12:24:07.285724 systemd[1700]: Startup finished in 172ms. Dec 16 12:24:07.286010 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:24:07.288168 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:24:07.313333 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:39800.service - OpenSSH per-connection server daemon (10.0.0.1:39800). Dec 16 12:24:07.388929 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 39800 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:24:07.390344 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:24:07.396605 systemd-logind[1558]: New session 2 of user core. Dec 16 12:24:07.414208 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:24:07.432729 sshd[1716]: Connection closed by 10.0.0.1 port 39800 Dec 16 12:24:07.433214 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Dec 16 12:24:07.442448 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:39800.service: Deactivated successfully. Dec 16 12:24:07.445627 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:24:07.447892 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:24:07.449273 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:39812.service - OpenSSH per-connection server daemon (10.0.0.1:39812). Dec 16 12:24:07.453209 systemd-logind[1558]: Removed session 2. Dec 16 12:24:07.513436 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 39812 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:24:07.515798 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:24:07.520743 systemd-logind[1558]: New session 3 of user core. Dec 16 12:24:07.528186 systemd[1]: Started session-3.scope - Session 3 of User core. 
Dec 16 12:24:07.537834 sshd[1725]: Connection closed by 10.0.0.1 port 39812 Dec 16 12:24:07.538440 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Dec 16 12:24:07.560837 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:39812.service: Deactivated successfully. Dec 16 12:24:07.562798 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:24:07.565019 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:24:07.568821 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:39814.service - OpenSSH per-connection server daemon (10.0.0.1:39814). Dec 16 12:24:07.569818 systemd-logind[1558]: Removed session 3. Dec 16 12:24:07.637556 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 39814 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:24:07.639028 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:24:07.644177 systemd-logind[1558]: New session 4 of user core. Dec 16 12:24:07.662191 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:24:07.684921 sshd[1734]: Connection closed by 10.0.0.1 port 39814 Dec 16 12:24:07.685610 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Dec 16 12:24:07.696445 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:39814.service: Deactivated successfully. Dec 16 12:24:07.699837 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:24:07.700752 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:24:07.709381 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:39828.service - OpenSSH per-connection server daemon (10.0.0.1:39828). Dec 16 12:24:07.710847 systemd-logind[1558]: Removed session 4. 
Dec 16 12:24:07.767733 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 39828 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:24:07.771380 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:24:07.776691 systemd-logind[1558]: New session 5 of user core. Dec 16 12:24:07.786206 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:24:07.842513 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:24:07.842834 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:24:07.861321 sudo[1744]: pam_unix(sudo:session): session closed for user root Dec 16 12:24:07.866073 sshd[1743]: Connection closed by 10.0.0.1 port 39828 Dec 16 12:24:07.865741 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Dec 16 12:24:07.876553 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:39828.service: Deactivated successfully. Dec 16 12:24:07.878467 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:24:07.881438 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:24:07.885610 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:39836.service - OpenSSH per-connection server daemon (10.0.0.1:39836). Dec 16 12:24:07.886288 systemd-logind[1558]: Removed session 5. Dec 16 12:24:07.957657 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 39836 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:24:07.959233 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:24:07.965107 systemd-logind[1558]: New session 6 of user core. Dec 16 12:24:07.975189 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 12:24:07.988888 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:24:07.989220 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:24:07.997587 sudo[1756]: pam_unix(sudo:session): session closed for user root Dec 16 12:24:08.005704 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:24:08.006072 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:24:08.018023 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:24:08.059000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:24:08.061148 augenrules[1778]: No rules Dec 16 12:24:08.061425 kernel: kauditd_printk_skb: 183 callbacks suppressed Dec 16 12:24:08.061453 kernel: audit: type=1305 audit(1765887848.059:226): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:24:08.062820 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:24:08.059000 audit[1778]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd4ff3b20 a2=420 a3=0 items=0 ppid=1759 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:08.063315 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 12:24:08.065576 sudo[1755]: pam_unix(sudo:session): session closed for user root Dec 16 12:24:08.066977 kernel: audit: type=1300 audit(1765887848.059:226): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd4ff3b20 a2=420 a3=0 items=0 ppid=1759 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:08.059000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:24:08.069158 kernel: audit: type=1327 audit(1765887848.059:226): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:24:08.069195 kernel: audit: type=1130 audit(1765887848.062:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.069274 sshd[1754]: Connection closed by 10.0.0.1 port 39836 Dec 16 12:24:08.069792 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Dec 16 12:24:08.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.074783 kernel: audit: type=1131 audit(1765887848.062:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:24:08.074840 kernel: audit: type=1106 audit(1765887848.062:229): pid=1755 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.062000 audit[1755]: USER_END pid=1755 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.077672 kernel: audit: type=1104 audit(1765887848.062:230): pid=1755 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.062000 audit[1755]: CRED_DISP pid=1755 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:24:08.070000 audit[1750]: USER_END pid=1750 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:08.083830 kernel: audit: type=1106 audit(1765887848.070:231): pid=1750 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:08.083864 kernel: audit: type=1104 audit(1765887848.070:232): pid=1750 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:08.070000 audit[1750]: CRED_DISP pid=1750 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:08.091745 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:39836.service: Deactivated successfully. Dec 16 12:24:08.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.36:22-10.0.0.1:39836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.094142 systemd[1]: session-6.scope: Deactivated successfully. 
Dec 16 12:24:08.094953 kernel: audit: type=1131 audit(1765887848.091:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.36:22-10.0.0.1:39836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.095742 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:24:08.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.36:22-10.0.0.1:39844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.098341 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:39844.service - OpenSSH per-connection server daemon (10.0.0.1:39844). Dec 16 12:24:08.099057 systemd-logind[1558]: Removed session 6. Dec 16 12:24:08.167000 audit[1787]: USER_ACCT pid=1787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:08.168597 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 39844 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:24:08.168000 audit[1787]: CRED_ACQ pid=1787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:08.168000 audit[1787]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa7c8b10 a2=3 a3=0 items=0 ppid=1 pid=1787 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:08.168000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:24:08.170012 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:24:08.175473 systemd-logind[1558]: New session 7 of user core. Dec 16 12:24:08.192176 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:24:08.193000 audit[1787]: USER_START pid=1787 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:08.195000 audit[1790]: CRED_ACQ pid=1790 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:08.204000 audit[1791]: USER_ACCT pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.206554 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:24:08.206000 audit[1791]: CRED_REFR pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:24:08.207380 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:24:08.208000 audit[1791]: USER_START pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:24:08.510968 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:24:08.527341 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:24:08.786075 dockerd[1811]: time="2025-12-16T12:24:08.785750800Z" level=info msg="Starting up" Dec 16 12:24:08.786931 dockerd[1811]: time="2025-12-16T12:24:08.786896671Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:24:08.802746 dockerd[1811]: time="2025-12-16T12:24:08.802652734Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:24:08.982812 dockerd[1811]: time="2025-12-16T12:24:08.982750140Z" level=info msg="Loading containers: start." Dec 16 12:24:08.995015 kernel: Initializing XFRM netlink socket Dec 16 12:24:09.040000 audit[1863]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.040000 audit[1863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd75878a0 a2=0 a3=0 items=0 ppid=1811 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:24:09.043000 audit[1865]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.043000 audit[1865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffced9cfb0 a2=0 a3=0 items=0 ppid=1811 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.043000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:24:09.045000 audit[1867]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.045000 audit[1867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc26fd900 a2=0 a3=0 items=0 ppid=1811 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:24:09.047000 audit[1869]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.047000 audit[1869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb526360 a2=0 a3=0 items=0 ppid=1811 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:24:09.049000 audit[1871]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.049000 audit[1871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffa685370 a2=0 a3=0 items=0 ppid=1811 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:24:09.051000 audit[1873]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.051000 audit[1873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcd96db90 a2=0 a3=0 items=0 ppid=1811 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:24:09.053000 audit[1875]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.053000 audit[1875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd0afa5e0 a2=0 a3=0 items=0 ppid=1811 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.053000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:24:09.055000 audit[1877]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.055000 audit[1877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffee328c70 a2=0 a3=0 items=0 ppid=1811 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.055000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:24:09.081000 audit[1880]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.081000 audit[1880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffed2317d0 a2=0 a3=0 items=0 ppid=1811 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.081000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:24:09.084000 audit[1882]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.084000 audit[1882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffed32a970 a2=0 a3=0 items=0 ppid=1811 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:24:09.087000 audit[1884]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.087000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=236 a0=3 a1=ffffdbc6ad40 a2=0 a3=0 items=0 ppid=1811 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.087000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:24:09.089000 audit[1886]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.089000 audit[1886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe96979d0 a2=0 a3=0 items=0 ppid=1811 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.089000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:24:09.091000 audit[1888]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.091000 audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc2f9d7f0 a2=0 a3=0 items=0 ppid=1811 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.091000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:24:09.130000 audit[1919]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.130000 
audit[1919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc6472a50 a2=0 a3=0 items=0 ppid=1811 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:24:09.132000 audit[1921]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.132000 audit[1921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffec650180 a2=0 a3=0 items=0 ppid=1811 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.132000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:24:09.134000 audit[1923]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.134000 audit[1923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff36ed150 a2=0 a3=0 items=0 ppid=1811 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.134000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:24:09.137000 audit[1925]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.137000 audit[1925]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffef10c0 a2=0 a3=0 items=0 ppid=1811 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.137000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:24:09.139000 audit[1927]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.139000 audit[1927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdb58c4f0 a2=0 a3=0 items=0 ppid=1811 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.139000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:24:09.141000 audit[1929]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.141000 audit[1929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff74a5630 a2=0 a3=0 items=0 ppid=1811 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.141000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:24:09.143000 audit[1931]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.143000 
audit[1931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc37ec7c0 a2=0 a3=0 items=0 ppid=1811 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.143000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:24:09.146000 audit[1933]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.146000 audit[1933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd07a47a0 a2=0 a3=0 items=0 ppid=1811 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.146000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:24:09.148000 audit[1935]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.148000 audit[1935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff6f0a6e0 a2=0 a3=0 items=0 ppid=1811 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 
12:24:09.150000 audit[1937]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.150000 audit[1937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd6f28190 a2=0 a3=0 items=0 ppid=1811 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:24:09.152000 audit[1939]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.152000 audit[1939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe813c1b0 a2=0 a3=0 items=0 ppid=1811 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:24:09.155000 audit[1941]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.155000 audit[1941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe52241b0 a2=0 a3=0 items=0 ppid=1811 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.155000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:24:09.157000 audit[1943]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.157000 audit[1943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff2c333b0 a2=0 a3=0 items=0 ppid=1811 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:24:09.163000 audit[1948]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.163000 audit[1948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc0b510a0 a2=0 a3=0 items=0 ppid=1811 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.163000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:24:09.166000 audit[1950]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.166000 audit[1950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcf95b080 a2=0 a3=0 items=0 ppid=1811 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:24:09.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:24:09.169000 audit[1952]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.169000 audit[1952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdbd8ca80 a2=0 a3=0 items=0 ppid=1811 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:24:09.172000 audit[1954]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.172000 audit[1954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc9481da0 a2=0 a3=0 items=0 ppid=1811 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:24:09.174000 audit[1956]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.174000 audit[1956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd797f1f0 a2=0 a3=0 items=0 ppid=1811 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.174000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:24:09.176000 audit[1958]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:09.176000 audit[1958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffeb8bfae0 a2=0 a3=0 items=0 ppid=1811 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:24:09.192000 audit[1963]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.192000 audit[1963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd920d750 a2=0 a3=0 items=0 ppid=1811 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.192000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:24:09.197000 audit[1966]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.197000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd03f1620 a2=0 a3=0 items=0 ppid=1811 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:24:09.205000 audit[1974]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.205000 audit[1974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffe12568c0 a2=0 a3=0 items=0 ppid=1811 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.205000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:24:09.215000 audit[1980]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.215000 audit[1980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe184bab0 a2=0 a3=0 items=0 ppid=1811 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.215000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:24:09.217000 audit[1982]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.217000 audit[1982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff96d7270 a2=0 a3=0 items=0 ppid=1811 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.217000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:24:09.221000 audit[1984]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.221000 audit[1984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffee9195d0 a2=0 a3=0 items=0 ppid=1811 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.221000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:24:09.223000 audit[1986]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.223000 audit[1986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe4daaef0 a2=0 a3=0 items=0 ppid=1811 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:24:09.225000 audit[1988]: NETFILTER_CFG table=filter:41 family=2 entries=1 
op=nft_register_rule pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:09.225000 audit[1988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd26b7f20 a2=0 a3=0 items=0 ppid=1811 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:09.225000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:24:09.227408 systemd-networkd[1489]: docker0: Link UP Dec 16 12:24:09.231867 dockerd[1811]: time="2025-12-16T12:24:09.231804965Z" level=info msg="Loading containers: done." Dec 16 12:24:09.245974 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1830239397-merged.mount: Deactivated successfully. Dec 16 12:24:09.250995 dockerd[1811]: time="2025-12-16T12:24:09.250938441Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:24:09.250995 dockerd[1811]: time="2025-12-16T12:24:09.251040305Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:24:09.251309 dockerd[1811]: time="2025-12-16T12:24:09.251205011Z" level=info msg="Initializing buildkit" Dec 16 12:24:09.281536 dockerd[1811]: time="2025-12-16T12:24:09.281479056Z" level=info msg="Completed buildkit initialization" Dec 16 12:24:09.288536 dockerd[1811]: time="2025-12-16T12:24:09.288453161Z" level=info msg="Daemon has completed initialization" Dec 16 12:24:09.288877 systemd[1]: Started docker.service - Docker Application Container Engine. 
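The audit PROCTITLE fields in the records above are the process command lines, NUL-separated and hex-encoded. A minimal sketch of the decoding (the sample value is the first ip6tables invocation taken verbatim from this log):

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Audit PROCTITLE payloads are the raw argv, NUL-separated, hex-encoded."""
    return bytes.fromhex(hex_value).decode("ascii").split("\x00")

# Sample value copied from the first PROCTITLE record above.
sample = (
    "2F7573722F62696E2F6970367461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
)
print(decode_proctitle(sample))
# → ['/usr/bin/ip6tables', '--wait', '-t', 'nat', '-N', 'DOCKER']
```

Decoding the remaining PROCTITLE records the same way reproduces the usual Docker chain setup (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) for both the IPv4 (family=2) and IPv6 (family=10) tables.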
Dec 16 12:24:09.289234 dockerd[1811]: time="2025-12-16T12:24:09.288533603Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:24:09.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:09.855816 containerd[1581]: time="2025-12-16T12:24:09.855553822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 12:24:10.658209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209605658.mount: Deactivated successfully. Dec 16 12:24:11.813194 containerd[1581]: time="2025-12-16T12:24:11.813135671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:11.826480 containerd[1581]: time="2025-12-16T12:24:11.826412698Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=26701968" Dec 16 12:24:11.835611 containerd[1581]: time="2025-12-16T12:24:11.835542676Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:11.841307 containerd[1581]: time="2025-12-16T12:24:11.841239756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:11.842521 containerd[1581]: time="2025-12-16T12:24:11.842245032Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", 
size \"27383880\" in 1.986639544s" Dec 16 12:24:11.842521 containerd[1581]: time="2025-12-16T12:24:11.842290064Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 16 12:24:11.843698 containerd[1581]: time="2025-12-16T12:24:11.843664827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 12:24:12.953037 containerd[1581]: time="2025-12-16T12:24:12.952343944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:12.953037 containerd[1581]: time="2025-12-16T12:24:12.952959190Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Dec 16 12:24:12.954126 containerd[1581]: time="2025-12-16T12:24:12.954064133Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:12.956779 containerd[1581]: time="2025-12-16T12:24:12.956731654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:12.957955 containerd[1581]: time="2025-12-16T12:24:12.957898744Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.114193882s" Dec 16 12:24:12.958033 containerd[1581]: time="2025-12-16T12:24:12.957960326Z" level=info 
msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 16 12:24:12.958453 containerd[1581]: time="2025-12-16T12:24:12.958427461Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 12:24:14.653490 containerd[1581]: time="2025-12-16T12:24:14.653423054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:14.657626 containerd[1581]: time="2025-12-16T12:24:14.657540668Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 16 12:24:14.659926 containerd[1581]: time="2025-12-16T12:24:14.659879220Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:14.666493 containerd[1581]: time="2025-12-16T12:24:14.666421495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:14.667537 containerd[1581]: time="2025-12-16T12:24:14.667483326Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.709024189s" Dec 16 12:24:14.667537 containerd[1581]: time="2025-12-16T12:24:14.667529022Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 16 12:24:14.668157 
containerd[1581]: time="2025-12-16T12:24:14.668003008Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 12:24:14.751061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:24:14.753028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:24:14.898353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:24:14.900291 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:24:14.900405 kernel: audit: type=1130 audit(1765887854.897:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:14.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:14.912273 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:24:14.957747 kubelet[2107]: E1216 12:24:14.957685 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:24:14.962138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:24:14.962274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:24:14.964884 systemd[1]: kubelet.service: Consumed 156ms CPU time, 107.9M memory peak. 
Dec 16 12:24:14.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:24:14.970980 kernel: audit: type=1131 audit(1765887854.961:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:24:15.832000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346274215.mount: Deactivated successfully. Dec 16 12:24:16.108325 containerd[1581]: time="2025-12-16T12:24:16.108153426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:16.109826 containerd[1581]: time="2025-12-16T12:24:16.109756480Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=18413667" Dec 16 12:24:16.111089 containerd[1581]: time="2025-12-16T12:24:16.111048246Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:16.115200 containerd[1581]: time="2025-12-16T12:24:16.115126468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:16.115800 containerd[1581]: time="2025-12-16T12:24:16.115750211Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.447710039s" Dec 16 
12:24:16.115800 containerd[1581]: time="2025-12-16T12:24:16.115791069Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 16 12:24:16.116549 containerd[1581]: time="2025-12-16T12:24:16.116515207Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 12:24:16.775787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641345273.mount: Deactivated successfully. Dec 16 12:24:17.682227 containerd[1581]: time="2025-12-16T12:24:17.682024055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:17.684856 containerd[1581]: time="2025-12-16T12:24:17.684785703Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Dec 16 12:24:17.686129 containerd[1581]: time="2025-12-16T12:24:17.686091170Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:17.690419 containerd[1581]: time="2025-12-16T12:24:17.690342264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:17.692149 containerd[1581]: time="2025-12-16T12:24:17.692099954Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.575546957s" Dec 16 12:24:17.692149 containerd[1581]: 
time="2025-12-16T12:24:17.692142027Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 16 12:24:17.692691 containerd[1581]: time="2025-12-16T12:24:17.692653458Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:24:18.140749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865038441.mount: Deactivated successfully. Dec 16 12:24:18.149943 containerd[1581]: time="2025-12-16T12:24:18.149597081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:24:18.156107 containerd[1581]: time="2025-12-16T12:24:18.156030144Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:24:18.157886 containerd[1581]: time="2025-12-16T12:24:18.157848759Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:24:18.170836 containerd[1581]: time="2025-12-16T12:24:18.170755522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:24:18.171418 containerd[1581]: time="2025-12-16T12:24:18.171391324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 478.702999ms" Dec 16 
12:24:18.171489 containerd[1581]: time="2025-12-16T12:24:18.171423520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:24:18.172427 containerd[1581]: time="2025-12-16T12:24:18.172364883Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 12:24:18.769346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985639317.mount: Deactivated successfully. Dec 16 12:24:20.325768 containerd[1581]: time="2025-12-16T12:24:20.325665165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:20.328377 containerd[1581]: time="2025-12-16T12:24:20.328006786Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Dec 16 12:24:20.329390 containerd[1581]: time="2025-12-16T12:24:20.329353951Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:20.332734 containerd[1581]: time="2025-12-16T12:24:20.332681354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:20.333947 containerd[1581]: time="2025-12-16T12:24:20.333799629Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.161305169s" Dec 16 12:24:20.333947 containerd[1581]: time="2025-12-16T12:24:20.333843150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image 
reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 16 12:24:25.001545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:24:25.003077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:24:25.170306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:24:25.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:25.174955 kernel: audit: type=1130 audit(1765887865.170:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:25.188304 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:24:25.240281 kubelet[2267]: E1216 12:24:25.240215 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:24:25.244826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:24:25.245178 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:24:25.245745 systemd[1]: kubelet.service: Consumed 154ms CPU time, 107.7M memory peak. Dec 16 12:24:25.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 12:24:25.252933 kernel: audit: type=1131 audit(1765887865.244:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:24:25.599118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:24:25.599274 systemd[1]: kubelet.service: Consumed 154ms CPU time, 107.7M memory peak. Dec 16 12:24:25.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:25.601873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:24:25.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:25.604870 kernel: audit: type=1130 audit(1765887865.598:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:25.604991 kernel: audit: type=1131 audit(1765887865.598:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:25.626890 systemd[1]: Reload requested from client PID 2281 ('systemctl') (unit session-7.scope)... Dec 16 12:24:25.626916 systemd[1]: Reloading... Dec 16 12:24:25.710963 zram_generator::config[2327]: No configuration found. Dec 16 12:24:25.918085 systemd[1]: Reloading finished in 290 ms. 
Dec 16 12:24:25.952254 kernel: audit: type=1334 audit(1765887865.949:290): prog-id=63 op=LOAD Dec 16 12:24:25.952366 kernel: audit: type=1334 audit(1765887865.949:291): prog-id=43 op=UNLOAD Dec 16 12:24:25.949000 audit: BPF prog-id=63 op=LOAD Dec 16 12:24:25.949000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:24:25.950000 audit: BPF prog-id=64 op=LOAD Dec 16 12:24:25.953305 kernel: audit: type=1334 audit(1765887865.950:292): prog-id=64 op=LOAD Dec 16 12:24:25.951000 audit: BPF prog-id=65 op=LOAD Dec 16 12:24:25.954122 kernel: audit: type=1334 audit(1765887865.951:293): prog-id=65 op=LOAD Dec 16 12:24:25.951000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:24:25.954970 kernel: audit: type=1334 audit(1765887865.951:294): prog-id=44 op=UNLOAD Dec 16 12:24:25.951000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:24:25.952000 audit: BPF prog-id=66 op=LOAD Dec 16 12:24:25.955927 kernel: audit: type=1334 audit(1765887865.951:295): prog-id=45 op=UNLOAD Dec 16 12:24:25.967000 audit: BPF prog-id=55 op=UNLOAD Dec 16 12:24:25.968000 audit: BPF prog-id=67 op=LOAD Dec 16 12:24:25.968000 audit: BPF prog-id=68 op=LOAD Dec 16 12:24:25.968000 audit: BPF prog-id=56 op=UNLOAD Dec 16 12:24:25.968000 audit: BPF prog-id=57 op=UNLOAD Dec 16 12:24:25.968000 audit: BPF prog-id=69 op=LOAD Dec 16 12:24:25.969000 audit: BPF prog-id=70 op=LOAD Dec 16 12:24:25.969000 audit: BPF prog-id=49 op=UNLOAD Dec 16 12:24:25.969000 audit: BPF prog-id=50 op=UNLOAD Dec 16 12:24:25.970000 audit: BPF prog-id=71 op=LOAD Dec 16 12:24:25.970000 audit: BPF prog-id=52 op=UNLOAD Dec 16 12:24:25.970000 audit: BPF prog-id=72 op=LOAD Dec 16 12:24:25.970000 audit: BPF prog-id=73 op=LOAD Dec 16 12:24:25.970000 audit: BPF prog-id=53 op=UNLOAD Dec 16 12:24:25.970000 audit: BPF prog-id=54 op=UNLOAD Dec 16 12:24:25.971000 audit: BPF prog-id=74 op=LOAD Dec 16 12:24:25.971000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:24:25.971000 audit: BPF prog-id=75 op=LOAD Dec 16 12:24:25.971000 audit: BPF prog-id=76 op=LOAD Dec 16 12:24:25.971000 
audit: BPF prog-id=47 op=UNLOAD Dec 16 12:24:25.971000 audit: BPF prog-id=48 op=UNLOAD Dec 16 12:24:25.972000 audit: BPF prog-id=77 op=LOAD Dec 16 12:24:25.973000 audit: BPF prog-id=59 op=UNLOAD Dec 16 12:24:25.973000 audit: BPF prog-id=78 op=LOAD Dec 16 12:24:25.973000 audit: BPF prog-id=51 op=UNLOAD Dec 16 12:24:25.974000 audit: BPF prog-id=79 op=LOAD Dec 16 12:24:25.974000 audit: BPF prog-id=58 op=UNLOAD Dec 16 12:24:25.976000 audit: BPF prog-id=80 op=LOAD Dec 16 12:24:25.976000 audit: BPF prog-id=60 op=UNLOAD Dec 16 12:24:25.976000 audit: BPF prog-id=81 op=LOAD Dec 16 12:24:25.976000 audit: BPF prog-id=82 op=LOAD Dec 16 12:24:25.976000 audit: BPF prog-id=61 op=UNLOAD Dec 16 12:24:25.976000 audit: BPF prog-id=62 op=UNLOAD Dec 16 12:24:25.989315 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:24:25.989410 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:24:25.989769 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:24:25.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:24:25.991775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:24:26.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:26.153024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:24:26.158835 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:24:26.202128 kubelet[2370]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:24:26.202128 kubelet[2370]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:24:26.202128 kubelet[2370]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:24:26.202128 kubelet[2370]: I1216 12:24:26.201344 2370 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:24:26.845950 kubelet[2370]: I1216 12:24:26.845827 2370 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:24:26.845950 kubelet[2370]: I1216 12:24:26.845860 2370 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:24:26.846210 kubelet[2370]: I1216 12:24:26.846175 2370 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:24:26.894545 kubelet[2370]: E1216 12:24:26.894264 2370 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:24:26.902322 kubelet[2370]: I1216 12:24:26.902143 2370 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:24:26.910098 kubelet[2370]: I1216 12:24:26.910067 2370 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:24:26.912751 kubelet[2370]: I1216 12:24:26.912730 2370 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 12:24:26.913193 kubelet[2370]: I1216 12:24:26.913161 2370 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:24:26.913350 kubelet[2370]: I1216 12:24:26.913195 2370 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUMana
gerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:24:26.913438 kubelet[2370]: I1216 12:24:26.913416 2370 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:24:26.913438 kubelet[2370]: I1216 12:24:26.913425 2370 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:24:26.913691 kubelet[2370]: I1216 12:24:26.913677 2370 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:24:26.916246 kubelet[2370]: I1216 12:24:26.916223 2370 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:24:26.916314 kubelet[2370]: I1216 12:24:26.916253 2370 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:24:26.916314 kubelet[2370]: I1216 12:24:26.916275 2370 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:24:26.917370 kubelet[2370]: I1216 12:24:26.917292 2370 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:24:26.918768 kubelet[2370]: I1216 12:24:26.918622 2370 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:24:26.919381 kubelet[2370]: I1216 12:24:26.919356 2370 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:24:26.919526 kubelet[2370]: W1216 12:24:26.919511 2370 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 12:24:26.920893 kubelet[2370]: E1216 12:24:26.920841 2370 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:24:26.922054 kubelet[2370]: E1216 12:24:26.921946 2370 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:24:26.922165 kubelet[2370]: I1216 12:24:26.922143 2370 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:24:26.922208 kubelet[2370]: I1216 12:24:26.922198 2370 server.go:1289] "Started kubelet" Dec 16 12:24:26.926371 kubelet[2370]: I1216 12:24:26.923870 2370 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:24:26.926371 kubelet[2370]: I1216 12:24:26.925253 2370 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:24:26.926371 kubelet[2370]: I1216 12:24:26.925659 2370 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:24:26.926371 kubelet[2370]: I1216 12:24:26.925708 2370 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:24:26.929050 kubelet[2370]: I1216 12:24:26.928982 2370 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:24:26.930513 kubelet[2370]: E1216 12:24:26.927887 2370 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.1881b1a8ad1158c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:24:26.922162368 +0000 UTC m=+0.759673170,LastTimestamp:2025-12-16 12:24:26.922162368 +0000 UTC m=+0.759673170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:24:26.930513 kubelet[2370]: I1216 12:24:26.930009 2370 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:24:26.931000 audit[2387]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:26.931000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff9164ad0 a2=0 a3=0 items=0 ppid=2370 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:24:26.936306 kubelet[2370]: I1216 12:24:26.933975 2370 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:24:26.936306 kubelet[2370]: E1216 12:24:26.934233 2370 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:24:26.936306 kubelet[2370]: E1216 12:24:26.934255 2370 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:24:26.936306 kubelet[2370]: E1216 12:24:26.934675 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Dec 16 12:24:26.936306 kubelet[2370]: I1216 12:24:26.934695 2370 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:24:26.936306 kubelet[2370]: I1216 12:24:26.934762 2370 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:24:26.936306 kubelet[2370]: I1216 12:24:26.934981 2370 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:24:26.936306 kubelet[2370]: I1216 12:24:26.935138 2370 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:24:26.936306 kubelet[2370]: E1216 12:24:26.935182 2370 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:24:26.936306 kubelet[2370]: I1216 12:24:26.936275 2370 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:24:26.936000 audit[2388]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:26.936000 audit[2388]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffc0c62f30 a2=0 a3=0 items=0 ppid=2370 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:24:26.939000 audit[2390]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:26.939000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd7c87460 a2=0 a3=0 items=0 ppid=2370 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:24:26.941000 audit[2393]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:26.941000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd22f36a0 a2=0 a3=0 items=0 ppid=2370 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:24:26.949990 kubelet[2370]: I1216 12:24:26.949827 2370 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:24:26.949990 kubelet[2370]: I1216 12:24:26.949944 2370 cpu_manager.go:222] 
"Reconciling" reconcilePeriod="10s" Dec 16 12:24:26.949990 kubelet[2370]: I1216 12:24:26.949968 2370 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:24:26.950000 audit[2399]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:26.950000 audit[2399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe88039b0 a2=0 a3=0 items=0 ppid=2370 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.950000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:24:26.952568 kubelet[2370]: I1216 12:24:26.952487 2370 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:24:26.952000 audit[2402]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:26.952000 audit[2402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc7006d60 a2=0 a3=0 items=0 ppid=2370 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:24:26.954090 kubelet[2370]: I1216 12:24:26.954051 2370 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:24:26.954131 kubelet[2370]: I1216 12:24:26.954099 2370 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:24:26.954131 kubelet[2370]: I1216 12:24:26.954124 2370 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:24:26.954175 kubelet[2370]: I1216 12:24:26.954132 2370 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:24:26.953000 audit[2401]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:26.953000 audit[2401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffce67c10 a2=0 a3=0 items=0 ppid=2370 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:24:26.954398 kubelet[2370]: E1216 12:24:26.954327 2370 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:24:26.955018 kubelet[2370]: E1216 12:24:26.954978 2370 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:24:26.954000 audit[2403]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:26.954000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=104 a0=3 a1=ffffd4ca7350 a2=0 a3=0 items=0 ppid=2370 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.954000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:24:26.955000 audit[2404]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:26.955000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0a39f60 a2=0 a3=0 items=0 ppid=2370 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:24:26.956000 audit[2405]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:26.956000 audit[2405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff61099c0 a2=0 a3=0 items=0 ppid=2370 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:24:26.957000 audit[2406]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:26.957000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=104 a0=3 a1=fffff19b4580 a2=0 a3=0 items=0 ppid=2370 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:24:26.958000 audit[2407]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:26.958000 audit[2407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2ea85f0 a2=0 a3=0 items=0 ppid=2370 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:26.958000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:24:27.034827 kubelet[2370]: E1216 12:24:27.034763 2370 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:24:27.038781 kubelet[2370]: I1216 12:24:27.038736 2370 policy_none.go:49] "None policy: Start" Dec 16 12:24:27.038781 kubelet[2370]: I1216 12:24:27.038775 2370 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:24:27.038867 kubelet[2370]: I1216 12:24:27.038788 2370 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:24:27.045123 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 12:24:27.055045 kubelet[2370]: E1216 12:24:27.054978 2370 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:24:27.065167 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:24:27.088599 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:24:27.090961 kubelet[2370]: E1216 12:24:27.090898 2370 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:24:27.091394 kubelet[2370]: I1216 12:24:27.091369 2370 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:24:27.091450 kubelet[2370]: I1216 12:24:27.091389 2370 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:24:27.091742 kubelet[2370]: I1216 12:24:27.091707 2370 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:24:27.092865 kubelet[2370]: E1216 12:24:27.092839 2370 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:24:27.093026 kubelet[2370]: E1216 12:24:27.092885 2370 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:24:27.136485 kubelet[2370]: E1216 12:24:27.135568 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Dec 16 12:24:27.192816 kubelet[2370]: I1216 12:24:27.192697 2370 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:24:27.193234 kubelet[2370]: E1216 12:24:27.193202 2370 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Dec 16 12:24:27.271654 systemd[1]: Created slice kubepods-burstable-pod4a8bdd97a9d35ee0f8ae27bdf77686b1.slice - libcontainer container kubepods-burstable-pod4a8bdd97a9d35ee0f8ae27bdf77686b1.slice. Dec 16 12:24:27.289145 kubelet[2370]: E1216 12:24:27.289093 2370 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:24:27.291476 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 16 12:24:27.293823 kubelet[2370]: E1216 12:24:27.293764 2370 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:24:27.296289 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Dec 16 12:24:27.298359 kubelet[2370]: E1216 12:24:27.298303 2370 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:24:27.335588 kubelet[2370]: I1216 12:24:27.335504 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a8bdd97a9d35ee0f8ae27bdf77686b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a8bdd97a9d35ee0f8ae27bdf77686b1\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:27.335588 kubelet[2370]: I1216 12:24:27.335548 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a8bdd97a9d35ee0f8ae27bdf77686b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a8bdd97a9d35ee0f8ae27bdf77686b1\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:27.335588 kubelet[2370]: I1216 12:24:27.335570 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a8bdd97a9d35ee0f8ae27bdf77686b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4a8bdd97a9d35ee0f8ae27bdf77686b1\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:27.335588 kubelet[2370]: I1216 12:24:27.335586 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:27.335588 kubelet[2370]: I1216 12:24:27.335603 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:27.335830 kubelet[2370]: I1216 12:24:27.335621 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:27.335830 kubelet[2370]: I1216 12:24:27.335650 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:27.335830 kubelet[2370]: I1216 12:24:27.335668 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:27.335830 kubelet[2370]: I1216 12:24:27.335681 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:24:27.397928 kubelet[2370]: I1216 12:24:27.395005 2370 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:24:27.397928 kubelet[2370]: 
E1216 12:24:27.395428 2370 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Dec 16 12:24:27.537549 kubelet[2370]: E1216 12:24:27.537478 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Dec 16 12:24:27.590485 kubelet[2370]: E1216 12:24:27.590435 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:27.591190 containerd[1581]: time="2025-12-16T12:24:27.591131508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4a8bdd97a9d35ee0f8ae27bdf77686b1,Namespace:kube-system,Attempt:0,}" Dec 16 12:24:27.594667 kubelet[2370]: E1216 12:24:27.594622 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:27.595213 containerd[1581]: time="2025-12-16T12:24:27.595163925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 16 12:24:27.600267 kubelet[2370]: E1216 12:24:27.599953 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:27.600627 containerd[1581]: time="2025-12-16T12:24:27.600572794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 16 12:24:27.797175 
kubelet[2370]: I1216 12:24:27.797143 2370 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:24:27.797823 kubelet[2370]: E1216 12:24:27.797783 2370 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Dec 16 12:24:27.858699 containerd[1581]: time="2025-12-16T12:24:27.858650109Z" level=info msg="connecting to shim f7e0a2c0d318aefcdd2d6f17774fe266d965262bcf0297b59c2d8e9fbca4b5a1" address="unix:///run/containerd/s/879a5215e4a2d962251f9f054de3079f60ce6706fb482be4b6417ceaf92f201f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:24:27.859595 containerd[1581]: time="2025-12-16T12:24:27.859562003Z" level=info msg="connecting to shim dacb91732e133b8513ba01baf15515ebfab4610232ef1229d05a371ce8354dad" address="unix:///run/containerd/s/769cf805164610f3d586d93216318f8070ffe564560c13b3c034fbb4acb4222a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:24:27.864400 containerd[1581]: time="2025-12-16T12:24:27.864354291Z" level=info msg="connecting to shim 1512b9c2e45205fcff71f0a0b88e4f4a5bedc60f5f87bb3173273db49cbcfc4b" address="unix:///run/containerd/s/eb5b3d264feedad27d0b00af7cc55468525befec34129acc47a9f8ca092eccce" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:24:27.893361 systemd[1]: Started cri-containerd-1512b9c2e45205fcff71f0a0b88e4f4a5bedc60f5f87bb3173273db49cbcfc4b.scope - libcontainer container 1512b9c2e45205fcff71f0a0b88e4f4a5bedc60f5f87bb3173273db49cbcfc4b. Dec 16 12:24:27.895466 systemd[1]: Started cri-containerd-dacb91732e133b8513ba01baf15515ebfab4610232ef1229d05a371ce8354dad.scope - libcontainer container dacb91732e133b8513ba01baf15515ebfab4610232ef1229d05a371ce8354dad. Dec 16 12:24:27.901777 systemd[1]: Started cri-containerd-f7e0a2c0d318aefcdd2d6f17774fe266d965262bcf0297b59c2d8e9fbca4b5a1.scope - libcontainer container f7e0a2c0d318aefcdd2d6f17774fe266d965262bcf0297b59c2d8e9fbca4b5a1. 
Dec 16 12:24:27.909000 audit: BPF prog-id=83 op=LOAD Dec 16 12:24:27.909000 audit: BPF prog-id=84 op=LOAD Dec 16 12:24:27.909000 audit[2459]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2433 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461636239313733326531333362383531336261303162616631353531 Dec 16 12:24:27.911000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:24:27.911000 audit[2459]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2433 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461636239313733326531333362383531336261303162616631353531 Dec 16 12:24:27.911000 audit: BPF prog-id=85 op=LOAD Dec 16 12:24:27.911000 audit[2459]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2433 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.911000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461636239313733326531333362383531336261303162616631353531 Dec 16 12:24:27.911000 audit: BPF prog-id=86 op=LOAD Dec 16 12:24:27.911000 audit[2459]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2433 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461636239313733326531333362383531336261303162616631353531 Dec 16 12:24:27.911000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:24:27.911000 audit[2459]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2433 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461636239313733326531333362383531336261303162616631353531 Dec 16 12:24:27.911000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:24:27.911000 audit[2459]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2433 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:24:27.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461636239313733326531333362383531336261303162616631353531 Dec 16 12:24:27.911000 audit: BPF prog-id=87 op=LOAD Dec 16 12:24:27.911000 audit[2459]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2433 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461636239313733326531333362383531336261303162616631353531 Dec 16 12:24:27.912000 audit: BPF prog-id=88 op=LOAD Dec 16 12:24:27.912000 audit: BPF prog-id=89 op=LOAD Dec 16 12:24:27.912000 audit[2467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2445 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313262396332653435323035666366663731663061306238386534 Dec 16 12:24:27.912000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:24:27.912000 audit[2467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313262396332653435323035666366663731663061306238386534 Dec 16 12:24:27.913000 audit: BPF prog-id=90 op=LOAD Dec 16 12:24:27.913000 audit[2467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2445 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313262396332653435323035666366663731663061306238386534 Dec 16 12:24:27.913000 audit: BPF prog-id=91 op=LOAD Dec 16 12:24:27.913000 audit[2467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2445 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313262396332653435323035666366663731663061306238386534 Dec 16 12:24:27.913000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:24:27.913000 audit[2467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313262396332653435323035666366663731663061306238386534 Dec 16 12:24:27.913000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:24:27.913000 audit[2467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313262396332653435323035666366663731663061306238386534 Dec 16 12:24:27.913000 audit: BPF prog-id=92 op=LOAD Dec 16 12:24:27.913000 audit[2467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2445 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135313262396332653435323035666366663731663061306238386534 Dec 16 12:24:27.919000 audit: BPF prog-id=93 op=LOAD Dec 16 12:24:27.919000 audit: BPF prog-id=94 op=LOAD Dec 16 12:24:27.919000 audit[2487]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=4000128180 a2=98 a3=0 items=0 ppid=2432 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637653061326330643331386165666364643264366631373737346665 Dec 16 12:24:27.919000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:24:27.919000 audit[2487]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2432 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637653061326330643331386165666364643264366631373737346665 Dec 16 12:24:27.920000 audit: BPF prog-id=95 op=LOAD Dec 16 12:24:27.920000 audit[2487]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2432 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637653061326330643331386165666364643264366631373737346665 Dec 16 12:24:27.920000 audit: BPF prog-id=96 op=LOAD Dec 16 12:24:27.920000 audit[2487]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2432 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637653061326330643331386165666364643264366631373737346665 Dec 16 12:24:27.921000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:24:27.921000 audit[2487]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2432 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637653061326330643331386165666364643264366631373737346665 Dec 16 12:24:27.921000 audit: BPF prog-id=95 op=UNLOAD Dec 16 12:24:27.921000 audit[2487]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2432 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637653061326330643331386165666364643264366631373737346665 Dec 16 12:24:27.921000 audit: BPF prog-id=97 op=LOAD Dec 16 
12:24:27.921000 audit[2487]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2432 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:27.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637653061326330643331386165666364643264366631373737346665 Dec 16 12:24:27.943939 containerd[1581]: time="2025-12-16T12:24:27.943877981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dacb91732e133b8513ba01baf15515ebfab4610232ef1229d05a371ce8354dad\"" Dec 16 12:24:27.945331 kubelet[2370]: E1216 12:24:27.945240 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:27.951584 containerd[1581]: time="2025-12-16T12:24:27.951542469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1512b9c2e45205fcff71f0a0b88e4f4a5bedc60f5f87bb3173273db49cbcfc4b\"" Dec 16 12:24:27.952610 kubelet[2370]: E1216 12:24:27.952491 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:27.960062 containerd[1581]: time="2025-12-16T12:24:27.959969993Z" level=info msg="CreateContainer within sandbox \"dacb91732e133b8513ba01baf15515ebfab4610232ef1229d05a371ce8354dad\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:24:27.962682 containerd[1581]: time="2025-12-16T12:24:27.962315503Z" level=info msg="CreateContainer within sandbox \"1512b9c2e45205fcff71f0a0b88e4f4a5bedc60f5f87bb3173273db49cbcfc4b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:24:27.962802 kubelet[2370]: E1216 12:24:27.962575 2370 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b1a8ad1158c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:24:26.922162368 +0000 UTC m=+0.759673170,LastTimestamp:2025-12-16 12:24:26.922162368 +0000 UTC m=+0.759673170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:24:27.963341 containerd[1581]: time="2025-12-16T12:24:27.963313642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4a8bdd97a9d35ee0f8ae27bdf77686b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7e0a2c0d318aefcdd2d6f17774fe266d965262bcf0297b59c2d8e9fbca4b5a1\"" Dec 16 12:24:27.964230 kubelet[2370]: E1216 12:24:27.964193 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:27.970579 containerd[1581]: time="2025-12-16T12:24:27.970530355Z" level=info msg="CreateContainer within sandbox \"f7e0a2c0d318aefcdd2d6f17774fe266d965262bcf0297b59c2d8e9fbca4b5a1\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:24:27.972870 containerd[1581]: time="2025-12-16T12:24:27.972822988Z" level=info msg="Container 7f46d8e88199070ffb4e86714610e8c2ce765cb5de5da0bed1bdbe9613dd529e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:24:27.983707 containerd[1581]: time="2025-12-16T12:24:27.983565537Z" level=info msg="Container 18db8406201b62e4ba26c812e70db4101cad6ba696387e72c86b40293cea9de6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:24:27.986164 containerd[1581]: time="2025-12-16T12:24:27.986124479Z" level=info msg="Container 36a7ffe122052ddb386efdb4afb62663efa7a1ad5bf30f2794bac424b63f220e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:24:27.989715 containerd[1581]: time="2025-12-16T12:24:27.989523329Z" level=info msg="CreateContainer within sandbox \"dacb91732e133b8513ba01baf15515ebfab4610232ef1229d05a371ce8354dad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7f46d8e88199070ffb4e86714610e8c2ce765cb5de5da0bed1bdbe9613dd529e\"" Dec 16 12:24:27.990357 containerd[1581]: time="2025-12-16T12:24:27.990326544Z" level=info msg="StartContainer for \"7f46d8e88199070ffb4e86714610e8c2ce765cb5de5da0bed1bdbe9613dd529e\"" Dec 16 12:24:27.991867 containerd[1581]: time="2025-12-16T12:24:27.991837554Z" level=info msg="connecting to shim 7f46d8e88199070ffb4e86714610e8c2ce765cb5de5da0bed1bdbe9613dd529e" address="unix:///run/containerd/s/769cf805164610f3d586d93216318f8070ffe564560c13b3c034fbb4acb4222a" protocol=ttrpc version=3 Dec 16 12:24:27.991954 containerd[1581]: time="2025-12-16T12:24:27.991850973Z" level=info msg="CreateContainer within sandbox \"1512b9c2e45205fcff71f0a0b88e4f4a5bedc60f5f87bb3173273db49cbcfc4b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"18db8406201b62e4ba26c812e70db4101cad6ba696387e72c86b40293cea9de6\"" Dec 16 12:24:27.992623 containerd[1581]: time="2025-12-16T12:24:27.992580520Z" level=info msg="StartContainer for 
\"18db8406201b62e4ba26c812e70db4101cad6ba696387e72c86b40293cea9de6\"" Dec 16 12:24:27.993721 containerd[1581]: time="2025-12-16T12:24:27.993686698Z" level=info msg="connecting to shim 18db8406201b62e4ba26c812e70db4101cad6ba696387e72c86b40293cea9de6" address="unix:///run/containerd/s/eb5b3d264feedad27d0b00af7cc55468525befec34129acc47a9f8ca092eccce" protocol=ttrpc version=3 Dec 16 12:24:27.998779 containerd[1581]: time="2025-12-16T12:24:27.998654963Z" level=info msg="CreateContainer within sandbox \"f7e0a2c0d318aefcdd2d6f17774fe266d965262bcf0297b59c2d8e9fbca4b5a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36a7ffe122052ddb386efdb4afb62663efa7a1ad5bf30f2794bac424b63f220e\"" Dec 16 12:24:28.000677 containerd[1581]: time="2025-12-16T12:24:27.999433982Z" level=info msg="StartContainer for \"36a7ffe122052ddb386efdb4afb62663efa7a1ad5bf30f2794bac424b63f220e\"" Dec 16 12:24:28.000677 containerd[1581]: time="2025-12-16T12:24:28.000565397Z" level=info msg="connecting to shim 36a7ffe122052ddb386efdb4afb62663efa7a1ad5bf30f2794bac424b63f220e" address="unix:///run/containerd/s/879a5215e4a2d962251f9f054de3079f60ce6706fb482be4b6417ceaf92f201f" protocol=ttrpc version=3 Dec 16 12:24:28.017188 systemd[1]: Started cri-containerd-18db8406201b62e4ba26c812e70db4101cad6ba696387e72c86b40293cea9de6.scope - libcontainer container 18db8406201b62e4ba26c812e70db4101cad6ba696387e72c86b40293cea9de6. Dec 16 12:24:28.018398 systemd[1]: Started cri-containerd-7f46d8e88199070ffb4e86714610e8c2ce765cb5de5da0bed1bdbe9613dd529e.scope - libcontainer container 7f46d8e88199070ffb4e86714610e8c2ce765cb5de5da0bed1bdbe9613dd529e. Dec 16 12:24:28.021823 systemd[1]: Started cri-containerd-36a7ffe122052ddb386efdb4afb62663efa7a1ad5bf30f2794bac424b63f220e.scope - libcontainer container 36a7ffe122052ddb386efdb4afb62663efa7a1ad5bf30f2794bac424b63f220e. 
Dec 16 12:24:28.031000 audit: BPF prog-id=98 op=LOAD Dec 16 12:24:28.032000 audit: BPF prog-id=99 op=LOAD Dec 16 12:24:28.032000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2445 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646238343036323031623632653462613236633831326537306462 Dec 16 12:24:28.033000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:24:28.033000 audit[2550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646238343036323031623632653462613236633831326537306462 Dec 16 12:24:28.033000 audit: BPF prog-id=100 op=LOAD Dec 16 12:24:28.033000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2445 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.033000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646238343036323031623632653462613236633831326537306462 Dec 16 12:24:28.033000 audit: BPF prog-id=101 op=LOAD Dec 16 12:24:28.033000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2445 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646238343036323031623632653462613236633831326537306462 Dec 16 12:24:28.033000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:24:28.033000 audit[2550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646238343036323031623632653462613236633831326537306462 Dec 16 12:24:28.033000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:24:28.033000 audit[2550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2445 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:24:28.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646238343036323031623632653462613236633831326537306462 Dec 16 12:24:28.033000 audit: BPF prog-id=102 op=LOAD Dec 16 12:24:28.033000 audit: BPF prog-id=103 op=LOAD Dec 16 12:24:28.033000 audit[2550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2445 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646238343036323031623632653462613236633831326537306462 Dec 16 12:24:28.034000 audit: BPF prog-id=104 op=LOAD Dec 16 12:24:28.034000 audit[2549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2433 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766343664386538383139393037306666623465383637313436313065 Dec 16 12:24:28.034000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:24:28.034000 audit[2549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2433 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766343664386538383139393037306666623465383637313436313065 Dec 16 12:24:28.034000 audit: BPF prog-id=105 op=LOAD Dec 16 12:24:28.034000 audit[2549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2433 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766343664386538383139393037306666623465383637313436313065 Dec 16 12:24:28.034000 audit: BPF prog-id=106 op=LOAD Dec 16 12:24:28.034000 audit[2549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2433 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766343664386538383139393037306666623465383637313436313065 Dec 16 12:24:28.034000 audit: BPF prog-id=106 op=UNLOAD Dec 16 12:24:28.034000 audit[2549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2433 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766343664386538383139393037306666623465383637313436313065 Dec 16 12:24:28.034000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:24:28.034000 audit[2549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2433 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766343664386538383139393037306666623465383637313436313065 Dec 16 12:24:28.034000 audit: BPF prog-id=107 op=LOAD Dec 16 12:24:28.034000 audit[2549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2433 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766343664386538383139393037306666623465383637313436313065 Dec 16 12:24:28.041000 audit: BPF prog-id=108 op=LOAD Dec 16 12:24:28.043000 audit: BPF prog-id=109 op=LOAD Dec 16 12:24:28.043000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2432 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613766666531323230353264646233383665666462346166623632 Dec 16 12:24:28.043000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:24:28.043000 audit[2572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2432 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613766666531323230353264646233383665666462346166623632 Dec 16 12:24:28.043000 audit: BPF prog-id=110 op=LOAD Dec 16 12:24:28.043000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2432 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613766666531323230353264646233383665666462346166623632 Dec 16 12:24:28.044000 audit: BPF prog-id=111 op=LOAD Dec 16 12:24:28.044000 audit[2572]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2432 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613766666531323230353264646233383665666462346166623632 Dec 16 12:24:28.044000 audit: BPF prog-id=111 op=UNLOAD Dec 16 12:24:28.044000 audit[2572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2432 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613766666531323230353264646233383665666462346166623632 Dec 16 12:24:28.044000 audit: BPF prog-id=110 op=UNLOAD Dec 16 12:24:28.044000 audit[2572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2432 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613766666531323230353264646233383665666462346166623632 Dec 16 12:24:28.044000 audit: BPF prog-id=112 op=LOAD 
Dec 16 12:24:28.044000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2432 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:28.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613766666531323230353264646233383665666462346166623632 Dec 16 12:24:28.072618 kubelet[2370]: E1216 12:24:28.072513 2370 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:24:28.077604 containerd[1581]: time="2025-12-16T12:24:28.077435596Z" level=info msg="StartContainer for \"18db8406201b62e4ba26c812e70db4101cad6ba696387e72c86b40293cea9de6\" returns successfully" Dec 16 12:24:28.078724 containerd[1581]: time="2025-12-16T12:24:28.078513015Z" level=info msg="StartContainer for \"7f46d8e88199070ffb4e86714610e8c2ce765cb5de5da0bed1bdbe9613dd529e\" returns successfully" Dec 16 12:24:28.097734 containerd[1581]: time="2025-12-16T12:24:28.097688235Z" level=info msg="StartContainer for \"36a7ffe122052ddb386efdb4afb62663efa7a1ad5bf30f2794bac424b63f220e\" returns successfully" Dec 16 12:24:28.125086 kubelet[2370]: E1216 12:24:28.125030 2370 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:24:28.184159 kubelet[2370]: E1216 12:24:28.184114 2370 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:24:28.600230 kubelet[2370]: I1216 12:24:28.600190 2370 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:24:28.965233 kubelet[2370]: E1216 12:24:28.965106 2370 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:24:28.965915 kubelet[2370]: E1216 12:24:28.965884 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:28.968383 kubelet[2370]: E1216 12:24:28.968356 2370 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:24:28.968536 kubelet[2370]: E1216 12:24:28.968488 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:28.971104 kubelet[2370]: E1216 12:24:28.971075 2370 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:24:28.971342 kubelet[2370]: E1216 12:24:28.971322 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:29.846954 
kubelet[2370]: E1216 12:24:29.845805 2370 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:24:29.910432 kubelet[2370]: I1216 12:24:29.910385 2370 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:24:29.922373 kubelet[2370]: I1216 12:24:29.922336 2370 apiserver.go:52] "Watching apiserver" Dec 16 12:24:29.935044 kubelet[2370]: I1216 12:24:29.934997 2370 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:24:29.935044 kubelet[2370]: I1216 12:24:29.935012 2370 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:29.945067 kubelet[2370]: E1216 12:24:29.945030 2370 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:29.945067 kubelet[2370]: I1216 12:24:29.945064 2370 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:29.948979 kubelet[2370]: E1216 12:24:29.948814 2370 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:29.948979 kubelet[2370]: I1216 12:24:29.948846 2370 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:24:29.953237 kubelet[2370]: E1216 12:24:29.953043 2370 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:24:29.973952 kubelet[2370]: I1216 12:24:29.971972 2370 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:24:29.974523 kubelet[2370]: I1216 12:24:29.972095 2370 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:29.974523 kubelet[2370]: I1216 12:24:29.972248 2370 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:29.980410 kubelet[2370]: E1216 12:24:29.980362 2370 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:24:29.980546 kubelet[2370]: E1216 12:24:29.980360 2370 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:29.980546 kubelet[2370]: E1216 12:24:29.980539 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:29.980694 kubelet[2370]: E1216 12:24:29.980623 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:29.980929 kubelet[2370]: E1216 12:24:29.980893 2370 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:29.981075 kubelet[2370]: E1216 12:24:29.981052 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:30.973609 kubelet[2370]: I1216 12:24:30.973381 
2370 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:24:30.973609 kubelet[2370]: I1216 12:24:30.973551 2370 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:30.981652 kubelet[2370]: E1216 12:24:30.980997 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:30.981652 kubelet[2370]: E1216 12:24:30.981322 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:31.975152 kubelet[2370]: E1216 12:24:31.975084 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:31.975703 kubelet[2370]: E1216 12:24:31.975289 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:32.121141 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-7.scope)... Dec 16 12:24:32.121157 systemd[1]: Reloading... Dec 16 12:24:32.204950 zram_generator::config[2708]: No configuration found. Dec 16 12:24:32.394384 systemd[1]: Reloading finished in 272 ms. Dec 16 12:24:32.424023 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:24:32.441349 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:24:32.442962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:24:32.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:32.443060 systemd[1]: kubelet.service: Consumed 1.140s CPU time, 128.2M memory peak. Dec 16 12:24:32.445823 kernel: kauditd_printk_skb: 204 callbacks suppressed Dec 16 12:24:32.445934 kernel: audit: type=1131 audit(1765887872.442:392): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:32.445882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:24:32.445000 audit: BPF prog-id=113 op=LOAD Dec 16 12:24:32.446000 audit: BPF prog-id=114 op=LOAD Dec 16 12:24:32.447970 kernel: audit: type=1334 audit(1765887872.445:393): prog-id=113 op=LOAD Dec 16 12:24:32.448013 kernel: audit: type=1334 audit(1765887872.446:394): prog-id=114 op=LOAD Dec 16 12:24:32.448032 kernel: audit: type=1334 audit(1765887872.446:395): prog-id=69 op=UNLOAD Dec 16 12:24:32.446000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:24:32.446000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:24:32.449360 kernel: audit: type=1334 audit(1765887872.446:396): prog-id=70 op=UNLOAD Dec 16 12:24:32.449415 kernel: audit: type=1334 audit(1765887872.447:397): prog-id=115 op=LOAD Dec 16 12:24:32.447000 audit: BPF prog-id=115 op=LOAD Dec 16 12:24:32.447000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:24:32.448000 audit: BPF prog-id=116 op=LOAD Dec 16 12:24:32.451621 kernel: audit: type=1334 audit(1765887872.447:398): prog-id=77 op=UNLOAD Dec 16 12:24:32.451701 kernel: audit: type=1334 audit(1765887872.448:399): prog-id=116 op=LOAD Dec 16 12:24:32.451731 kernel: audit: type=1334 audit(1765887872.448:400): prog-id=74 op=UNLOAD Dec 16 12:24:32.448000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:24:32.449000 audit: BPF 
prog-id=117 op=LOAD Dec 16 12:24:32.453187 kernel: audit: type=1334 audit(1765887872.449:401): prog-id=117 op=LOAD Dec 16 12:24:32.450000 audit: BPF prog-id=118 op=LOAD Dec 16 12:24:32.450000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:24:32.450000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:24:32.451000 audit: BPF prog-id=119 op=LOAD Dec 16 12:24:32.469000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:24:32.469000 audit: BPF prog-id=120 op=LOAD Dec 16 12:24:32.469000 audit: BPF prog-id=121 op=LOAD Dec 16 12:24:32.469000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:24:32.469000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:24:32.470000 audit: BPF prog-id=122 op=LOAD Dec 16 12:24:32.470000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:24:32.470000 audit: BPF prog-id=123 op=LOAD Dec 16 12:24:32.470000 audit: BPF prog-id=124 op=LOAD Dec 16 12:24:32.470000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:24:32.470000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:24:32.471000 audit: BPF prog-id=125 op=LOAD Dec 16 12:24:32.471000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:24:32.472000 audit: BPF prog-id=126 op=LOAD Dec 16 12:24:32.472000 audit: BPF prog-id=63 op=UNLOAD Dec 16 12:24:32.472000 audit: BPF prog-id=127 op=LOAD Dec 16 12:24:32.472000 audit: BPF prog-id=128 op=LOAD Dec 16 12:24:32.472000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:24:32.472000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:24:32.473000 audit: BPF prog-id=129 op=LOAD Dec 16 12:24:32.473000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:24:32.476000 audit: BPF prog-id=130 op=LOAD Dec 16 12:24:32.476000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:24:32.476000 audit: BPF prog-id=131 op=LOAD Dec 16 12:24:32.476000 audit: BPF prog-id=132 op=LOAD Dec 16 12:24:32.476000 audit: BPF prog-id=81 op=UNLOAD Dec 16 12:24:32.476000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:24:32.608056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:24:32.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:32.612603 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:24:32.656439 kubelet[2747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:24:32.656439 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:24:32.656439 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:24:32.656439 kubelet[2747]: I1216 12:24:32.656402 2747 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:24:32.663474 kubelet[2747]: I1216 12:24:32.663420 2747 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:24:32.663474 kubelet[2747]: I1216 12:24:32.663459 2747 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:24:32.663744 kubelet[2747]: I1216 12:24:32.663709 2747 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:24:32.665550 kubelet[2747]: I1216 12:24:32.665507 2747 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:24:32.668252 kubelet[2747]: I1216 12:24:32.668209 2747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:24:32.678323 kubelet[2747]: I1216 12:24:32.678292 2747 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:24:32.681953 kubelet[2747]: I1216 12:24:32.681670 2747 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:24:32.681953 kubelet[2747]: I1216 12:24:32.681864 2747 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:24:32.682277 kubelet[2747]: I1216 12:24:32.681890 2747 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:24:32.682428 kubelet[2747]: I1216 12:24:32.682412 2747 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:24:32.682503 
kubelet[2747]: I1216 12:24:32.682493 2747 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:24:32.682617 kubelet[2747]: I1216 12:24:32.682604 2747 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:24:32.682867 kubelet[2747]: I1216 12:24:32.682851 2747 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:24:32.683021 kubelet[2747]: I1216 12:24:32.683005 2747 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:24:32.683116 kubelet[2747]: I1216 12:24:32.683104 2747 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:24:32.683181 kubelet[2747]: I1216 12:24:32.683172 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:24:32.686835 kubelet[2747]: I1216 12:24:32.686799 2747 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:24:32.687617 kubelet[2747]: I1216 12:24:32.687524 2747 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:24:32.694779 kubelet[2747]: I1216 12:24:32.694059 2747 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:24:32.694779 kubelet[2747]: I1216 12:24:32.694117 2747 server.go:1289] "Started kubelet" Dec 16 12:24:32.694779 kubelet[2747]: I1216 12:24:32.694725 2747 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:24:32.696875 kubelet[2747]: I1216 12:24:32.695768 2747 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:24:32.696875 kubelet[2747]: I1216 12:24:32.696497 2747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:24:32.696875 kubelet[2747]: I1216 12:24:32.696843 2747 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:24:32.699457 
kubelet[2747]: I1216 12:24:32.698656 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:24:32.703227 kubelet[2747]: I1216 12:24:32.702841 2747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:24:32.703227 kubelet[2747]: I1216 12:24:32.703119 2747 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:24:32.703373 kubelet[2747]: E1216 12:24:32.703341 2747 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:24:32.703601 kubelet[2747]: I1216 12:24:32.703573 2747 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:24:32.703821 kubelet[2747]: I1216 12:24:32.703793 2747 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:24:32.706714 kubelet[2747]: I1216 12:24:32.706682 2747 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:24:32.708142 kubelet[2747]: I1216 12:24:32.708064 2747 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:24:32.714002 kubelet[2747]: I1216 12:24:32.713161 2747 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:24:32.726071 kubelet[2747]: I1216 12:24:32.726023 2747 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:24:32.728043 kubelet[2747]: I1216 12:24:32.727972 2747 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:24:32.728043 kubelet[2747]: I1216 12:24:32.728010 2747 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:24:32.728043 kubelet[2747]: I1216 12:24:32.728037 2747 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:24:32.728043 kubelet[2747]: I1216 12:24:32.728045 2747 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:24:32.728254 kubelet[2747]: E1216 12:24:32.728093 2747 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:24:32.757860 kubelet[2747]: I1216 12:24:32.757803 2747 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:24:32.757860 kubelet[2747]: I1216 12:24:32.757827 2747 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:24:32.757860 kubelet[2747]: I1216 12:24:32.757852 2747 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:24:32.758101 kubelet[2747]: I1216 12:24:32.758079 2747 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:24:32.758128 kubelet[2747]: I1216 12:24:32.758099 2747 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:24:32.758128 kubelet[2747]: I1216 12:24:32.758124 2747 policy_none.go:49] "None policy: Start" Dec 16 12:24:32.758182 kubelet[2747]: I1216 12:24:32.758134 2747 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:24:32.758182 kubelet[2747]: I1216 12:24:32.758146 2747 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:24:32.758249 kubelet[2747]: I1216 12:24:32.758238 2747 state_mem.go:75] "Updated machine memory state" Dec 16 12:24:32.765719 kubelet[2747]: E1216 12:24:32.765682 2747 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:24:32.765962 kubelet[2747]: I1216 
12:24:32.765892 2747 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:24:32.765962 kubelet[2747]: I1216 12:24:32.765933 2747 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:24:32.766568 kubelet[2747]: I1216 12:24:32.766475 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:24:32.770248 kubelet[2747]: E1216 12:24:32.770205 2747 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:24:32.829350 kubelet[2747]: I1216 12:24:32.829264 2747 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:32.829350 kubelet[2747]: I1216 12:24:32.829294 2747 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:24:32.829350 kubelet[2747]: I1216 12:24:32.829359 2747 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:32.836829 kubelet[2747]: E1216 12:24:32.836591 2747 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:24:32.837420 kubelet[2747]: E1216 12:24:32.837320 2747 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:32.869780 kubelet[2747]: I1216 12:24:32.869751 2747 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:24:32.890973 kubelet[2747]: I1216 12:24:32.890762 2747 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:24:32.890973 kubelet[2747]: I1216 12:24:32.890855 2747 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:24:32.904560 
kubelet[2747]: I1216 12:24:32.904521 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:32.904763 kubelet[2747]: I1216 12:24:32.904746 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a8bdd97a9d35ee0f8ae27bdf77686b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a8bdd97a9d35ee0f8ae27bdf77686b1\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:32.904895 kubelet[2747]: I1216 12:24:32.904880 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:32.905015 kubelet[2747]: I1216 12:24:32.905001 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:32.905096 kubelet[2747]: I1216 12:24:32.905084 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:24:32.905185 
kubelet[2747]: I1216 12:24:32.905172 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a8bdd97a9d35ee0f8ae27bdf77686b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a8bdd97a9d35ee0f8ae27bdf77686b1\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:32.905280 kubelet[2747]: I1216 12:24:32.905257 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a8bdd97a9d35ee0f8ae27bdf77686b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4a8bdd97a9d35ee0f8ae27bdf77686b1\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:32.905353 kubelet[2747]: I1216 12:24:32.905342 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:32.905427 kubelet[2747]: I1216 12:24:32.905414 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:24:33.137543 kubelet[2747]: E1216 12:24:33.137005 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:33.137543 kubelet[2747]: E1216 12:24:33.137385 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:33.137543 kubelet[2747]: E1216 12:24:33.137492 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:33.685462 kubelet[2747]: I1216 12:24:33.685132 2747 apiserver.go:52] "Watching apiserver" Dec 16 12:24:33.704055 kubelet[2747]: I1216 12:24:33.704007 2747 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:24:33.740713 kubelet[2747]: I1216 12:24:33.740540 2747 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:33.741595 kubelet[2747]: E1216 12:24:33.741222 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:33.741595 kubelet[2747]: E1216 12:24:33.741511 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:33.751070 kubelet[2747]: E1216 12:24:33.751014 2747 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:24:33.751444 kubelet[2747]: E1216 12:24:33.751426 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:33.782094 kubelet[2747]: I1216 12:24:33.781884 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.781862035 podStartE2EDuration="1.781862035s" podCreationTimestamp="2025-12-16 12:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:33.770020257 +0000 UTC m=+1.152752520" watchObservedRunningTime="2025-12-16 12:24:33.781862035 +0000 UTC m=+1.164594298" Dec 16 12:24:33.783134 kubelet[2747]: I1216 12:24:33.783069 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.783052697 podStartE2EDuration="3.783052697s" podCreationTimestamp="2025-12-16 12:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:33.781777619 +0000 UTC m=+1.164509882" watchObservedRunningTime="2025-12-16 12:24:33.783052697 +0000 UTC m=+1.165784960" Dec 16 12:24:33.792379 kubelet[2747]: I1216 12:24:33.792303 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.792286202 podStartE2EDuration="3.792286202s" podCreationTimestamp="2025-12-16 12:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:33.791269974 +0000 UTC m=+1.174002237" watchObservedRunningTime="2025-12-16 12:24:33.792286202 +0000 UTC m=+1.175018425" Dec 16 12:24:34.742466 kubelet[2747]: E1216 12:24:34.742348 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:34.742836 kubelet[2747]: E1216 12:24:34.742472 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:35.744940 kubelet[2747]: E1216 12:24:35.744382 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:35.745427 kubelet[2747]: E1216 12:24:35.745411 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:38.485797 kubelet[2747]: I1216 12:24:38.485761 2747 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:24:38.486453 containerd[1581]: time="2025-12-16T12:24:38.486382805Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:24:38.486849 kubelet[2747]: I1216 12:24:38.486624 2747 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:24:39.500232 systemd[1]: Created slice kubepods-besteffort-pod3f95ce07_5f8b_4d1d_b048_234dc22a8ff5.slice - libcontainer container kubepods-besteffort-pod3f95ce07_5f8b_4d1d_b048_234dc22a8ff5.slice. 
Dec 16 12:24:39.547298 kubelet[2747]: I1216 12:24:39.547214 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f95ce07-5f8b-4d1d-b048-234dc22a8ff5-kube-proxy\") pod \"kube-proxy-5652m\" (UID: \"3f95ce07-5f8b-4d1d-b048-234dc22a8ff5\") " pod="kube-system/kube-proxy-5652m" Dec 16 12:24:39.547298 kubelet[2747]: I1216 12:24:39.547294 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f95ce07-5f8b-4d1d-b048-234dc22a8ff5-xtables-lock\") pod \"kube-proxy-5652m\" (UID: \"3f95ce07-5f8b-4d1d-b048-234dc22a8ff5\") " pod="kube-system/kube-proxy-5652m" Dec 16 12:24:39.547699 kubelet[2747]: I1216 12:24:39.547325 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f95ce07-5f8b-4d1d-b048-234dc22a8ff5-lib-modules\") pod \"kube-proxy-5652m\" (UID: \"3f95ce07-5f8b-4d1d-b048-234dc22a8ff5\") " pod="kube-system/kube-proxy-5652m" Dec 16 12:24:39.547699 kubelet[2747]: I1216 12:24:39.547379 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkfkt\" (UniqueName: \"kubernetes.io/projected/3f95ce07-5f8b-4d1d-b048-234dc22a8ff5-kube-api-access-gkfkt\") pod \"kube-proxy-5652m\" (UID: \"3f95ce07-5f8b-4d1d-b048-234dc22a8ff5\") " pod="kube-system/kube-proxy-5652m" Dec 16 12:24:39.688881 systemd[1]: Created slice kubepods-besteffort-pod8a5c8fd1_8510_4901_859e_d74d5261c098.slice - libcontainer container kubepods-besteffort-pod8a5c8fd1_8510_4901_859e_d74d5261c098.slice. 
Dec 16 12:24:39.748710 kubelet[2747]: I1216 12:24:39.748634 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8a5c8fd1-8510-4901-859e-d74d5261c098-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mqkkv\" (UID: \"8a5c8fd1-8510-4901-859e-d74d5261c098\") " pod="tigera-operator/tigera-operator-7dcd859c48-mqkkv" Dec 16 12:24:39.748710 kubelet[2747]: I1216 12:24:39.748699 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmnzp\" (UniqueName: \"kubernetes.io/projected/8a5c8fd1-8510-4901-859e-d74d5261c098-kube-api-access-hmnzp\") pod \"tigera-operator-7dcd859c48-mqkkv\" (UID: \"8a5c8fd1-8510-4901-859e-d74d5261c098\") " pod="tigera-operator/tigera-operator-7dcd859c48-mqkkv" Dec 16 12:24:39.816488 kubelet[2747]: E1216 12:24:39.816404 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:39.820466 containerd[1581]: time="2025-12-16T12:24:39.820402836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5652m,Uid:3f95ce07-5f8b-4d1d-b048-234dc22a8ff5,Namespace:kube-system,Attempt:0,}" Dec 16 12:24:39.850950 containerd[1581]: time="2025-12-16T12:24:39.850886737Z" level=info msg="connecting to shim ccbe671cb47aac6906de34529416a635b37eb02ee8b61723eec941f0fc865ad2" address="unix:///run/containerd/s/54f2d6d62cc5746e7dce38f11524212e583e28d4d9251df69ac8c17b17506c17" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:24:39.898286 systemd[1]: Started cri-containerd-ccbe671cb47aac6906de34529416a635b37eb02ee8b61723eec941f0fc865ad2.scope - libcontainer container ccbe671cb47aac6906de34529416a635b37eb02ee8b61723eec941f0fc865ad2. 
Dec 16 12:24:39.908000 audit: BPF prog-id=133 op=LOAD Dec 16 12:24:39.911687 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:24:39.911820 kernel: audit: type=1334 audit(1765887879.908:434): prog-id=133 op=LOAD Dec 16 12:24:39.911000 audit: BPF prog-id=134 op=LOAD Dec 16 12:24:39.911000 audit[2825]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:39.917965 kernel: audit: type=1334 audit(1765887879.911:435): prog-id=134 op=LOAD Dec 16 12:24:39.918128 kernel: audit: type=1300 audit(1765887879.911:435): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:39.918162 kernel: audit: type=1327 audit(1765887879.911:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.911000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:24:39.924571 kernel: audit: type=1334 audit(1765887879.911:436): prog-id=134 op=UNLOAD Dec 16 12:24:39.924635 kernel: audit: type=1300 audit(1765887879.911:436): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:39.911000 audit[2825]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:39.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.935996 kernel: audit: type=1327 audit(1765887879.911:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.936133 kernel: audit: type=1334 audit(1765887879.911:437): prog-id=135 op=LOAD Dec 16 12:24:39.911000 audit: BPF prog-id=135 op=LOAD Dec 16 12:24:39.937173 kernel: audit: type=1300 audit(1765887879.911:437): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:39.911000 audit[2825]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:24:39.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.949503 kernel: audit: type=1327 audit(1765887879.911:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.912000 audit: BPF prog-id=136 op=LOAD Dec 16 12:24:39.912000 audit[2825]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:39.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.917000 audit: BPF prog-id=136 op=UNLOAD Dec 16 12:24:39.917000 audit[2825]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:39.917000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.917000 audit: BPF prog-id=135 op=UNLOAD Dec 16 12:24:39.917000 audit[2825]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:39.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.917000 audit: BPF prog-id=137 op=LOAD Dec 16 12:24:39.917000 audit[2825]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:39.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363626536373163623437616163363930366465333435323934313661 Dec 16 12:24:39.963988 containerd[1581]: time="2025-12-16T12:24:39.963902533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5652m,Uid:3f95ce07-5f8b-4d1d-b048-234dc22a8ff5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccbe671cb47aac6906de34529416a635b37eb02ee8b61723eec941f0fc865ad2\"" Dec 16 12:24:39.964960 kubelet[2747]: E1216 
12:24:39.964851 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:39.970691 containerd[1581]: time="2025-12-16T12:24:39.970651292Z" level=info msg="CreateContainer within sandbox \"ccbe671cb47aac6906de34529416a635b37eb02ee8b61723eec941f0fc865ad2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:24:39.994166 containerd[1581]: time="2025-12-16T12:24:39.986155809Z" level=info msg="Container bd0c6e78717a796aef7ab6358f527afb4fcdf6e13e76deb641ddd12700da6e10: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:24:39.997259 containerd[1581]: time="2025-12-16T12:24:39.997219733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mqkkv,Uid:8a5c8fd1-8510-4901-859e-d74d5261c098,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:24:40.005796 containerd[1581]: time="2025-12-16T12:24:40.005742977Z" level=info msg="CreateContainer within sandbox \"ccbe671cb47aac6906de34529416a635b37eb02ee8b61723eec941f0fc865ad2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd0c6e78717a796aef7ab6358f527afb4fcdf6e13e76deb641ddd12700da6e10\"" Dec 16 12:24:40.006478 containerd[1581]: time="2025-12-16T12:24:40.006425377Z" level=info msg="StartContainer for \"bd0c6e78717a796aef7ab6358f527afb4fcdf6e13e76deb641ddd12700da6e10\"" Dec 16 12:24:40.008506 containerd[1581]: time="2025-12-16T12:24:40.008148285Z" level=info msg="connecting to shim bd0c6e78717a796aef7ab6358f527afb4fcdf6e13e76deb641ddd12700da6e10" address="unix:///run/containerd/s/54f2d6d62cc5746e7dce38f11524212e583e28d4d9251df69ac8c17b17506c17" protocol=ttrpc version=3 Dec 16 12:24:40.032067 containerd[1581]: time="2025-12-16T12:24:40.031483145Z" level=info msg="connecting to shim 6ca3c55ec6f06bf8bead5ebc5f86c5d4cbb802bfcf98cbf95d05cece8dd162ad" 
address="unix:///run/containerd/s/6f6760a65011695003aba1df00ab41b015eb435fc348143d5ea8a13c298cd886" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:24:40.037941 systemd[1]: Started cri-containerd-bd0c6e78717a796aef7ab6358f527afb4fcdf6e13e76deb641ddd12700da6e10.scope - libcontainer container bd0c6e78717a796aef7ab6358f527afb4fcdf6e13e76deb641ddd12700da6e10. Dec 16 12:24:40.066334 systemd[1]: Started cri-containerd-6ca3c55ec6f06bf8bead5ebc5f86c5d4cbb802bfcf98cbf95d05cece8dd162ad.scope - libcontainer container 6ca3c55ec6f06bf8bead5ebc5f86c5d4cbb802bfcf98cbf95d05cece8dd162ad. Dec 16 12:24:40.077000 audit: BPF prog-id=138 op=LOAD Dec 16 12:24:40.078000 audit: BPF prog-id=139 op=LOAD Dec 16 12:24:40.078000 audit[2882]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2870 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663613363353565633666303662663862656164356562633566383663 Dec 16 12:24:40.078000 audit: BPF prog-id=139 op=UNLOAD Dec 16 12:24:40.078000 audit[2882]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2870 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663613363353565633666303662663862656164356562633566383663 Dec 
16 12:24:40.078000 audit: BPF prog-id=140 op=LOAD Dec 16 12:24:40.078000 audit[2882]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2870 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663613363353565633666303662663862656164356562633566383663 Dec 16 12:24:40.078000 audit: BPF prog-id=141 op=LOAD Dec 16 12:24:40.078000 audit[2882]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2870 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663613363353565633666303662663862656164356562633566383663 Dec 16 12:24:40.078000 audit: BPF prog-id=141 op=UNLOAD Dec 16 12:24:40.078000 audit[2882]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2870 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.078000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663613363353565633666303662663862656164356562633566383663 Dec 16 12:24:40.078000 audit: BPF prog-id=140 op=UNLOAD Dec 16 12:24:40.078000 audit[2882]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2870 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663613363353565633666303662663862656164356562633566383663 Dec 16 12:24:40.078000 audit: BPF prog-id=142 op=LOAD Dec 16 12:24:40.078000 audit[2882]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2870 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663613363353565633666303662663862656164356562633566383663 Dec 16 12:24:40.103000 audit: BPF prog-id=143 op=LOAD Dec 16 12:24:40.103000 audit[2850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:24:40.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264306336653738373137613739366165663761623633353866353237 Dec 16 12:24:40.103000 audit: BPF prog-id=144 op=LOAD Dec 16 12:24:40.103000 audit[2850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264306336653738373137613739366165663761623633353866353237 Dec 16 12:24:40.103000 audit: BPF prog-id=144 op=UNLOAD Dec 16 12:24:40.103000 audit[2850]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264306336653738373137613739366165663761623633353866353237 Dec 16 12:24:40.103000 audit: BPF prog-id=143 op=UNLOAD Dec 16 12:24:40.103000 audit[2850]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264306336653738373137613739366165663761623633353866353237 Dec 16 12:24:40.103000 audit: BPF prog-id=145 op=LOAD Dec 16 12:24:40.103000 audit[2850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264306336653738373137613739366165663761623633353866353237 Dec 16 12:24:40.113371 containerd[1581]: time="2025-12-16T12:24:40.113262451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mqkkv,Uid:8a5c8fd1-8510-4901-859e-d74d5261c098,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6ca3c55ec6f06bf8bead5ebc5f86c5d4cbb802bfcf98cbf95d05cece8dd162ad\"" Dec 16 12:24:40.115601 containerd[1581]: time="2025-12-16T12:24:40.115408342Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:24:40.131855 containerd[1581]: time="2025-12-16T12:24:40.131815751Z" level=info msg="StartContainer for \"bd0c6e78717a796aef7ab6358f527afb4fcdf6e13e76deb641ddd12700da6e10\" returns successfully" Dec 16 12:24:40.308000 audit[2962]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.308000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 
a1=ffffe612b4e0 a2=0 a3=1 items=0 ppid=2883 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.308000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:24:40.310000 audit[2964]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.310000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee947140 a2=0 a3=1 items=0 ppid=2883 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.310000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:24:40.311000 audit[2963]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.311000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc237ebf0 a2=0 a3=1 items=0 ppid=2883 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:24:40.312000 audit[2966]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.312000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 
a0=3 a1=ffffcbf1a190 a2=0 a3=1 items=0 ppid=2883 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.312000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:24:40.312000 audit[2967]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.312000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7a631d0 a2=0 a3=1 items=0 ppid=2883 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.312000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:24:40.315000 audit[2970]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.315000 audit[2970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3492320 a2=0 a3=1 items=0 ppid=2883 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:24:40.412000 audit[2971]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.412000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=108 a0=3 a1=fffff54a7b60 a2=0 a3=1 items=0 ppid=2883 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:24:40.415000 audit[2973]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.415000 audit[2973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff2f28490 a2=0 a3=1 items=0 ppid=2883 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:24:40.420000 audit[2976]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.420000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd8400630 a2=0 a3=1 items=0 ppid=2883 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.420000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:24:40.422000 audit[2977]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.422000 audit[2977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff68a0090 a2=0 a3=1 items=0 ppid=2883 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:24:40.426000 audit[2979]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.426000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd3a08a70 a2=0 a3=1 items=0 ppid=2883 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:24:40.427000 audit[2980]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.427000 audit[2980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=ffffc4e9b660 a2=0 a3=1 items=0 ppid=2883 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:24:40.431000 audit[2982]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.431000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffff0cc120 a2=0 a3=1 items=0 ppid=2883 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:24:40.435000 audit[2985]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.435000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd3b3dd10 a2=0 a3=1 items=0 ppid=2883 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.435000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:24:40.437000 audit[2986]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.437000 audit[2986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8699500 a2=0 a3=1 items=0 ppid=2883 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:24:40.440000 audit[2988]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.440000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff04376c0 a2=0 a3=1 items=0 ppid=2883 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.440000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:24:40.442000 audit[2989]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.442000 audit[2989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6223080 a2=0 a3=1 
items=0 ppid=2883 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:24:40.446000 audit[2991]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.446000 audit[2991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffefc5be80 a2=0 a3=1 items=0 ppid=2883 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:24:40.451000 audit[2994]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.451000 audit[2994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcfe3ab20 a2=0 a3=1 items=0 ppid=2883 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.451000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:24:40.456000 audit[2997]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.456000 audit[2997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe34c6a80 a2=0 a3=1 items=0 ppid=2883 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:24:40.458000 audit[2998]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.458000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd89e8bc0 a2=0 a3=1 items=0 ppid=2883 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:24:40.461000 audit[3000]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.461000 audit[3000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 
a1=ffffd2595bc0 a2=0 a3=1 items=0 ppid=2883 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:24:40.466000 audit[3003]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.466000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff433eb60 a2=0 a3=1 items=0 ppid=2883 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:24:40.468000 audit[3004]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.468000 audit[3004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0d5ac20 a2=0 a3=1 items=0 ppid=2883 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:24:40.471000 audit[3006]: NETFILTER_CFG 
table=nat:78 family=2 entries=1 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:24:40.471000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffc51c5130 a2=0 a3=1 items=0 ppid=2883 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:24:40.501000 audit[3012]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:40.501000 audit[3012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd29b4300 a2=0 a3=1 items=0 ppid=2883 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.501000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:40.519000 audit[3012]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:40.519000 audit[3012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd29b4300 a2=0 a3=1 items=0 ppid=2883 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.519000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:40.521000 audit[3017]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.521000 audit[3017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffda087650 a2=0 a3=1 items=0 ppid=2883 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.521000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:24:40.525000 audit[3019]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.525000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff3b6fb50 a2=0 a3=1 items=0 ppid=2883 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.525000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:24:40.530000 audit[3022]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.530000 audit[3022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd13a7130 a2=0 a3=1 items=0 ppid=2883 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:24:40.531000 audit[3023]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.531000 audit[3023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce5d7bb0 a2=0 a3=1 items=0 ppid=2883 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:24:40.536000 audit[3025]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.536000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffceb2d520 a2=0 a3=1 items=0 ppid=2883 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.536000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:24:40.538000 audit[3026]: NETFILTER_CFG table=filter:86 family=10 entries=1 
op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.538000 audit[3026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd40d4110 a2=0 a3=1 items=0 ppid=2883 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.538000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:24:40.542000 audit[3028]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.542000 audit[3028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe6a78500 a2=0 a3=1 items=0 ppid=2883 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.542000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:24:40.550000 audit[3031]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.550000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe37aac30 a2=0 a3=1 items=0 ppid=2883 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.550000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:24:40.552000 audit[3032]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.552000 audit[3032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc598f950 a2=0 a3=1 items=0 ppid=2883 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.552000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:24:40.556000 audit[3034]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.556000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6448110 a2=0 a3=1 items=0 ppid=2883 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:24:40.558000 audit[3035]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.558000 audit[3035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff6bbe480 a2=0 
a3=1 items=0 ppid=2883 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:24:40.562000 audit[3037]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.562000 audit[3037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcb6d9bc0 a2=0 a3=1 items=0 ppid=2883 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.562000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:24:40.568000 audit[3040]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.568000 audit[3040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe69c7270 a2=0 a3=1 items=0 ppid=2883 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.568000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:24:40.573000 audit[3043]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.573000 audit[3043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffffb5f790 a2=0 a3=1 items=0 ppid=2883 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:24:40.575000 audit[3044]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.575000 audit[3044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff8eec990 a2=0 a3=1 items=0 ppid=2883 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.575000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:24:40.580000 audit[3046]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.580000 audit[3046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 
a0=3 a1=ffffcd6cd1c0 a2=0 a3=1 items=0 ppid=2883 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:24:40.587000 audit[3049]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.587000 audit[3049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd71cad30 a2=0 a3=1 items=0 ppid=2883 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:24:40.588000 audit[3050]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.588000 audit[3050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9c03380 a2=0 a3=1 items=0 ppid=2883 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:24:40.592000 audit[3052]: 
NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.592000 audit[3052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff3e66e00 a2=0 a3=1 items=0 ppid=2883 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:24:40.593000 audit[3053]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.593000 audit[3053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9b010c0 a2=0 a3=1 items=0 ppid=2883 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:24:40.597000 audit[3055]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.597000 audit[3055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc30d1ea0 a2=0 a3=1 items=0 ppid=2883 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.597000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:24:40.603000 audit[3058]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:24:40.603000 audit[3058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff3b5e400 a2=0 a3=1 items=0 ppid=2883 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:24:40.609000 audit[3060]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:24:40.609000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffeb0d3130 a2=0 a3=1 items=0 ppid=2883 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.609000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:40.610000 audit[3060]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:24:40.610000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffeb0d3130 a2=0 a3=1 items=0 ppid=2883 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:40.610000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:40.759266 kubelet[2747]: E1216 12:24:40.758692 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:41.114109 kubelet[2747]: E1216 12:24:41.113977 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:41.138333 kubelet[2747]: I1216 12:24:41.137028 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5652m" podStartSLOduration=2.137007689 podStartE2EDuration="2.137007689s" podCreationTimestamp="2025-12-16 12:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:40.777226829 +0000 UTC m=+8.159959092" watchObservedRunningTime="2025-12-16 12:24:41.137007689 +0000 UTC m=+8.519740072" Dec 16 12:24:41.664184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185931982.mount: Deactivated successfully. 
Dec 16 12:24:41.759637 kubelet[2747]: E1216 12:24:41.759596 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:44.110827 containerd[1581]: time="2025-12-16T12:24:44.110755615Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:24:44.114972 containerd[1581]: time="2025-12-16T12:24:44.114867842Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.999412836s" Dec 16 12:24:44.114972 containerd[1581]: time="2025-12-16T12:24:44.114920104Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:24:44.119978 containerd[1581]: time="2025-12-16T12:24:44.119928792Z" level=info msg="CreateContainer within sandbox \"6ca3c55ec6f06bf8bead5ebc5f86c5d4cbb802bfcf98cbf95d05cece8dd162ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:24:44.132092 containerd[1581]: time="2025-12-16T12:24:44.131580423Z" level=info msg="Container 03ead1b9010fd0f39fd441e0de554994644dd7d8697a9aa905d9b3fe3ae4fd47: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:24:44.133348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2451713022.mount: Deactivated successfully. 
Dec 16 12:24:44.137877 containerd[1581]: time="2025-12-16T12:24:44.137819874Z" level=info msg="CreateContainer within sandbox \"6ca3c55ec6f06bf8bead5ebc5f86c5d4cbb802bfcf98cbf95d05cece8dd162ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"03ead1b9010fd0f39fd441e0de554994644dd7d8697a9aa905d9b3fe3ae4fd47\"" Dec 16 12:24:44.139404 containerd[1581]: time="2025-12-16T12:24:44.138627857Z" level=info msg="StartContainer for \"03ead1b9010fd0f39fd441e0de554994644dd7d8697a9aa905d9b3fe3ae4fd47\"" Dec 16 12:24:44.139404 containerd[1581]: time="2025-12-16T12:24:44.138869560Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:44.139779 containerd[1581]: time="2025-12-16T12:24:44.139719521Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:44.140444 containerd[1581]: time="2025-12-16T12:24:44.140395808Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:44.140849 containerd[1581]: time="2025-12-16T12:24:44.140794377Z" level=info msg="connecting to shim 03ead1b9010fd0f39fd441e0de554994644dd7d8697a9aa905d9b3fe3ae4fd47" address="unix:///run/containerd/s/6f6760a65011695003aba1df00ab41b015eb435fc348143d5ea8a13c298cd886" protocol=ttrpc version=3 Dec 16 12:24:44.186170 systemd[1]: Started cri-containerd-03ead1b9010fd0f39fd441e0de554994644dd7d8697a9aa905d9b3fe3ae4fd47.scope - libcontainer container 03ead1b9010fd0f39fd441e0de554994644dd7d8697a9aa905d9b3fe3ae4fd47. 
Dec 16 12:24:44.196000 audit: BPF prog-id=146 op=LOAD Dec 16 12:24:44.197000 audit: BPF prog-id=147 op=LOAD Dec 16 12:24:44.197000 audit[3071]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2870 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:44.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656164316239303130666430663339666434343165306465353534 Dec 16 12:24:44.197000 audit: BPF prog-id=147 op=UNLOAD Dec 16 12:24:44.197000 audit[3071]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2870 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:44.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656164316239303130666430663339666434343165306465353534 Dec 16 12:24:44.197000 audit: BPF prog-id=148 op=LOAD Dec 16 12:24:44.197000 audit[3071]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2870 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:44.197000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656164316239303130666430663339666434343165306465353534 Dec 16 12:24:44.197000 audit: BPF prog-id=149 op=LOAD Dec 16 12:24:44.197000 audit[3071]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2870 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:44.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656164316239303130666430663339666434343165306465353534 Dec 16 12:24:44.197000 audit: BPF prog-id=149 op=UNLOAD Dec 16 12:24:44.197000 audit[3071]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2870 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:44.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656164316239303130666430663339666434343165306465353534 Dec 16 12:24:44.197000 audit: BPF prog-id=148 op=UNLOAD Dec 16 12:24:44.197000 audit[3071]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2870 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:24:44.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656164316239303130666430663339666434343165306465353534 Dec 16 12:24:44.197000 audit: BPF prog-id=150 op=LOAD Dec 16 12:24:44.197000 audit[3071]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2870 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:44.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656164316239303130666430663339666434343165306465353534 Dec 16 12:24:44.215788 containerd[1581]: time="2025-12-16T12:24:44.215749144Z" level=info msg="StartContainer for \"03ead1b9010fd0f39fd441e0de554994644dd7d8697a9aa905d9b3fe3ae4fd47\" returns successfully" Dec 16 12:24:44.232216 kubelet[2747]: E1216 12:24:44.232056 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:44.704487 kubelet[2747]: E1216 12:24:44.704365 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:44.768270 kubelet[2747]: E1216 12:24:44.768169 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:47.490673 update_engine[1560]: I20251216 12:24:47.490006 1560 
update_attempter.cc:509] Updating boot flags... Dec 16 12:24:50.272332 sudo[1791]: pam_unix(sudo:session): session closed for user root Dec 16 12:24:50.275632 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:24:50.275677 kernel: audit: type=1106 audit(1765887890.271:514): pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:24:50.271000 audit[1791]: USER_END pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:24:50.271000 audit[1791]: CRED_DISP pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:24:50.277555 sshd[1790]: Connection closed by 10.0.0.1 port 39844 Dec 16 12:24:50.278841 kernel: audit: type=1104 audit(1765887890.271:515): pid=1791 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:24:50.279749 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Dec 16 12:24:50.280000 audit[1787]: USER_END pid=1787 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:50.280000 audit[1787]: CRED_DISP pid=1787 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:50.285477 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:24:50.287082 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:39844.service: Deactivated successfully. Dec 16 12:24:50.288359 kernel: audit: type=1106 audit(1765887890.280:516): pid=1787 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:50.288421 kernel: audit: type=1104 audit(1765887890.280:517): pid=1787 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:24:50.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.36:22-10.0.0.1:39844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:24:50.291594 kernel: audit: type=1131 audit(1765887890.287:518): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.36:22-10.0.0.1:39844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:24:50.292428 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:24:50.292676 systemd[1]: session-7.scope: Consumed 7.271s CPU time, 208.3M memory peak. Dec 16 12:24:50.300987 systemd-logind[1558]: Removed session 7. Dec 16 12:24:50.672000 audit[3181]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:50.679938 kernel: audit: type=1325 audit(1765887890.672:519): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:50.680075 kernel: audit: type=1300 audit(1765887890.672:519): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc28fc460 a2=0 a3=1 items=0 ppid=2883 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:50.672000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc28fc460 a2=0 a3=1 items=0 ppid=2883 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:50.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:50.686476 kernel: audit: type=1327 audit(1765887890.672:519): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 
12:24:50.682000 audit[3181]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:50.688387 kernel: audit: type=1325 audit(1765887890.682:520): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:50.682000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc28fc460 a2=0 a3=1 items=0 ppid=2883 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:50.692555 kernel: audit: type=1300 audit(1765887890.682:520): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc28fc460 a2=0 a3=1 items=0 ppid=2883 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:50.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:50.709000 audit[3183]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:50.709000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffcda10e0 a2=0 a3=1 items=0 ppid=2883 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:50.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:50.719000 audit[3183]: NETFILTER_CFG table=nat:108 
family=2 entries=12 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:50.719000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffcda10e0 a2=0 a3=1 items=0 ppid=2883 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:50.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:54.362000 audit[3187]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:54.362000 audit[3187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc18954e0 a2=0 a3=1 items=0 ppid=2883 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:54.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:54.372000 audit[3187]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:54.372000 audit[3187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc18954e0 a2=0 a3=1 items=0 ppid=2883 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:54.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:54.391000 
audit[3189]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:54.391000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd3f330a0 a2=0 a3=1 items=0 ppid=2883 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:54.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:54.399000 audit[3189]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:54.399000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3f330a0 a2=0 a3=1 items=0 ppid=2883 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:54.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:55.415000 audit[3191]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:55.420325 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 12:24:55.420490 kernel: audit: type=1325 audit(1765887895.415:527): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:55.415000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdb26ac10 a2=0 a3=1 items=0 ppid=2883 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:55.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:55.427976 kernel: audit: type=1300 audit(1765887895.415:527): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdb26ac10 a2=0 a3=1 items=0 ppid=2883 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:55.428086 kernel: audit: type=1327 audit(1765887895.415:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:55.426000 audit[3191]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:55.430104 kernel: audit: type=1325 audit(1765887895.426:528): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:55.426000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb26ac10 a2=0 a3=1 items=0 ppid=2883 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:55.434258 kernel: audit: type=1300 audit(1765887895.426:528): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb26ac10 a2=0 a3=1 items=0 ppid=2883 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:55.426000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:55.436661 kernel: audit: type=1327 audit(1765887895.426:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:57.434000 audit[3193]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:57.434000 audit[3193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe4690a00 a2=0 a3=1 items=0 ppid=2883 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:57.441974 kernel: audit: type=1325 audit(1765887897.434:529): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:57.442087 kernel: audit: type=1300 audit(1765887897.434:529): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe4690a00 a2=0 a3=1 items=0 ppid=2883 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:57.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:57.444472 kernel: audit: type=1327 audit(1765887897.434:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:57.448000 audit[3193]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:57.448000 audit[3193]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=2700 a0=3 a1=ffffe4690a00 a2=0 a3=1 items=0 ppid=2883 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:57.451922 kernel: audit: type=1325 audit(1765887897.448:530): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:57.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:57.473822 kubelet[2747]: I1216 12:24:57.473735 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mqkkv" podStartSLOduration=14.471764406 podStartE2EDuration="18.473718252s" podCreationTimestamp="2025-12-16 12:24:39 +0000 UTC" firstStartedPulling="2025-12-16 12:24:40.114943858 +0000 UTC m=+7.497676081" lastFinishedPulling="2025-12-16 12:24:44.116897664 +0000 UTC m=+11.499629927" observedRunningTime="2025-12-16 12:24:44.783007475 +0000 UTC m=+12.165739698" watchObservedRunningTime="2025-12-16 12:24:57.473718252 +0000 UTC m=+24.856450515" Dec 16 12:24:57.480000 audit[3195]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:57.480000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe10c5eb0 a2=0 a3=1 items=0 ppid=2883 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:57.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:57.490000 audit[3195]: 
NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:57.490000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe10c5eb0 a2=0 a3=1 items=0 ppid=2883 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:57.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:57.504704 systemd[1]: Created slice kubepods-besteffort-podffb6a89c_615a_459c_84d9_148076f0c50f.slice - libcontainer container kubepods-besteffort-podffb6a89c_615a_459c_84d9_148076f0c50f.slice. Dec 16 12:24:57.569846 kubelet[2747]: I1216 12:24:57.569787 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb6a89c-615a-459c-84d9-148076f0c50f-tigera-ca-bundle\") pod \"calico-typha-996b577cb-wprv2\" (UID: \"ffb6a89c-615a-459c-84d9-148076f0c50f\") " pod="calico-system/calico-typha-996b577cb-wprv2" Dec 16 12:24:57.569846 kubelet[2747]: I1216 12:24:57.569841 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ffb6a89c-615a-459c-84d9-148076f0c50f-typha-certs\") pod \"calico-typha-996b577cb-wprv2\" (UID: \"ffb6a89c-615a-459c-84d9-148076f0c50f\") " pod="calico-system/calico-typha-996b577cb-wprv2" Dec 16 12:24:57.569846 kubelet[2747]: I1216 12:24:57.569865 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd2qh\" (UniqueName: \"kubernetes.io/projected/ffb6a89c-615a-459c-84d9-148076f0c50f-kube-api-access-sd2qh\") pod \"calico-typha-996b577cb-wprv2\" (UID: 
\"ffb6a89c-615a-459c-84d9-148076f0c50f\") " pod="calico-system/calico-typha-996b577cb-wprv2" Dec 16 12:24:57.600197 systemd[1]: Created slice kubepods-besteffort-pod47e9c68e_e1bc_459a_aa64_9ab28c23b00f.slice - libcontainer container kubepods-besteffort-pod47e9c68e_e1bc_459a_aa64_9ab28c23b00f.slice. Dec 16 12:24:57.671465 kubelet[2747]: I1216 12:24:57.671403 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-xtables-lock\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671465 kubelet[2747]: I1216 12:24:57.671469 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-tigera-ca-bundle\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671646 kubelet[2747]: I1216 12:24:57.671489 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-flexvol-driver-host\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671646 kubelet[2747]: I1216 12:24:57.671538 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-policysync\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671646 kubelet[2747]: I1216 12:24:57.671594 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-cni-bin-dir\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671646 kubelet[2747]: I1216 12:24:57.671618 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-var-run-calico\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671730 kubelet[2747]: I1216 12:24:57.671693 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-cni-log-dir\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671784 kubelet[2747]: I1216 12:24:57.671749 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-cni-net-dir\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671784 kubelet[2747]: I1216 12:24:57.671768 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9knjf\" (UniqueName: \"kubernetes.io/projected/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-kube-api-access-9knjf\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671857 kubelet[2747]: I1216 12:24:57.671783 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-lib-modules\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671857 kubelet[2747]: I1216 12:24:57.671800 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-var-lib-calico\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.671857 kubelet[2747]: I1216 12:24:57.671831 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/47e9c68e-e1bc-459a-aa64-9ab28c23b00f-node-certs\") pod \"calico-node-6wxbl\" (UID: \"47e9c68e-e1bc-459a-aa64-9ab28c23b00f\") " pod="calico-system/calico-node-6wxbl" Dec 16 12:24:57.715532 kubelet[2747]: E1216 12:24:57.715314 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b" Dec 16 12:24:57.772488 kubelet[2747]: I1216 12:24:57.772296 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/179aa3f5-01af-4f0c-91ba-27b0e8267d2b-socket-dir\") pod \"csi-node-driver-ndhz8\" (UID: \"179aa3f5-01af-4f0c-91ba-27b0e8267d2b\") " pod="calico-system/csi-node-driver-ndhz8" Dec 16 12:24:57.772488 kubelet[2747]: I1216 12:24:57.772345 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnztb\" (UniqueName: 
\"kubernetes.io/projected/179aa3f5-01af-4f0c-91ba-27b0e8267d2b-kube-api-access-bnztb\") pod \"csi-node-driver-ndhz8\" (UID: \"179aa3f5-01af-4f0c-91ba-27b0e8267d2b\") " pod="calico-system/csi-node-driver-ndhz8" Dec 16 12:24:57.772918 kubelet[2747]: I1216 12:24:57.772866 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/179aa3f5-01af-4f0c-91ba-27b0e8267d2b-kubelet-dir\") pod \"csi-node-driver-ndhz8\" (UID: \"179aa3f5-01af-4f0c-91ba-27b0e8267d2b\") " pod="calico-system/csi-node-driver-ndhz8" Dec 16 12:24:57.772982 kubelet[2747]: I1216 12:24:57.772919 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/179aa3f5-01af-4f0c-91ba-27b0e8267d2b-registration-dir\") pod \"csi-node-driver-ndhz8\" (UID: \"179aa3f5-01af-4f0c-91ba-27b0e8267d2b\") " pod="calico-system/csi-node-driver-ndhz8" Dec 16 12:24:57.774680 kubelet[2747]: I1216 12:24:57.774116 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/179aa3f5-01af-4f0c-91ba-27b0e8267d2b-varrun\") pod \"csi-node-driver-ndhz8\" (UID: \"179aa3f5-01af-4f0c-91ba-27b0e8267d2b\") " pod="calico-system/csi-node-driver-ndhz8" Dec 16 12:24:57.777265 kubelet[2747]: E1216 12:24:57.777230 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.777265 kubelet[2747]: W1216 12:24:57.777264 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.778885 kubelet[2747]: E1216 12:24:57.778616 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.785779 kubelet[2747]: E1216 12:24:57.785719 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.786157 kubelet[2747]: W1216 12:24:57.785744 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.786157 kubelet[2747]: E1216 12:24:57.785940 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.797945 kubelet[2747]: E1216 12:24:57.797661 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.797945 kubelet[2747]: W1216 12:24:57.797692 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.797945 kubelet[2747]: E1216 12:24:57.797718 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.807971 kubelet[2747]: E1216 12:24:57.807933 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:57.808654 containerd[1581]: time="2025-12-16T12:24:57.808611232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-996b577cb-wprv2,Uid:ffb6a89c-615a-459c-84d9-148076f0c50f,Namespace:calico-system,Attempt:0,}" Dec 16 12:24:57.875647 kubelet[2747]: E1216 12:24:57.875595 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.875647 kubelet[2747]: W1216 12:24:57.875621 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.876953 kubelet[2747]: E1216 12:24:57.875809 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.876953 kubelet[2747]: E1216 12:24:57.876262 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.876953 kubelet[2747]: W1216 12:24:57.876278 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.876953 kubelet[2747]: E1216 12:24:57.876293 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.877545 kubelet[2747]: E1216 12:24:57.877510 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.877545 kubelet[2747]: W1216 12:24:57.877532 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.877820 kubelet[2747]: E1216 12:24:57.877557 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.877820 kubelet[2747]: E1216 12:24:57.877881 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.877820 kubelet[2747]: W1216 12:24:57.877895 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.877820 kubelet[2747]: E1216 12:24:57.877923 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.878184 kubelet[2747]: E1216 12:24:57.878158 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.878184 kubelet[2747]: W1216 12:24:57.878174 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.878267 kubelet[2747]: E1216 12:24:57.878187 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.883581 kubelet[2747]: E1216 12:24:57.881723 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.883581 kubelet[2747]: W1216 12:24:57.881746 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.883581 kubelet[2747]: E1216 12:24:57.881770 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.883756 kubelet[2747]: E1216 12:24:57.883652 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.883756 kubelet[2747]: W1216 12:24:57.883681 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.883756 kubelet[2747]: E1216 12:24:57.883710 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.884658 kubelet[2747]: E1216 12:24:57.884602 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.884658 kubelet[2747]: W1216 12:24:57.884631 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.884658 kubelet[2747]: E1216 12:24:57.884655 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.886040 kubelet[2747]: E1216 12:24:57.886006 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.886040 kubelet[2747]: W1216 12:24:57.886032 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.886191 kubelet[2747]: E1216 12:24:57.886053 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.886979 kubelet[2747]: E1216 12:24:57.886265 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.886979 kubelet[2747]: W1216 12:24:57.886282 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.886979 kubelet[2747]: E1216 12:24:57.886293 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.886979 kubelet[2747]: E1216 12:24:57.886471 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.886979 kubelet[2747]: W1216 12:24:57.886481 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.886979 kubelet[2747]: E1216 12:24:57.886490 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.886979 kubelet[2747]: E1216 12:24:57.886646 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.886979 kubelet[2747]: W1216 12:24:57.886654 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.886979 kubelet[2747]: E1216 12:24:57.886662 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.886979 kubelet[2747]: E1216 12:24:57.886814 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.887298 kubelet[2747]: W1216 12:24:57.886822 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.887298 kubelet[2747]: E1216 12:24:57.886830 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.887517 kubelet[2747]: E1216 12:24:57.887483 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.887517 kubelet[2747]: W1216 12:24:57.887509 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.887580 kubelet[2747]: E1216 12:24:57.887523 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.887758 kubelet[2747]: E1216 12:24:57.887736 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.887758 kubelet[2747]: W1216 12:24:57.887749 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.887758 kubelet[2747]: E1216 12:24:57.887759 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.887981 kubelet[2747]: E1216 12:24:57.887964 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.887981 kubelet[2747]: W1216 12:24:57.887976 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.888032 kubelet[2747]: E1216 12:24:57.887988 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.888181 kubelet[2747]: E1216 12:24:57.888160 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.888181 kubelet[2747]: W1216 12:24:57.888172 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.888271 kubelet[2747]: E1216 12:24:57.888181 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.889683 kubelet[2747]: E1216 12:24:57.889653 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.889683 kubelet[2747]: W1216 12:24:57.889678 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.889792 kubelet[2747]: E1216 12:24:57.889696 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.891069 kubelet[2747]: E1216 12:24:57.891037 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.891069 kubelet[2747]: W1216 12:24:57.891061 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.891193 kubelet[2747]: E1216 12:24:57.891082 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.892129 kubelet[2747]: E1216 12:24:57.892103 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.892216 kubelet[2747]: W1216 12:24:57.892130 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.892216 kubelet[2747]: E1216 12:24:57.892146 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.893960 kubelet[2747]: E1216 12:24:57.893728 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.893960 kubelet[2747]: W1216 12:24:57.893755 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.893960 kubelet[2747]: E1216 12:24:57.893774 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.896338 kubelet[2747]: E1216 12:24:57.896303 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.896338 kubelet[2747]: W1216 12:24:57.896329 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.896495 kubelet[2747]: E1216 12:24:57.896351 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.898454 kubelet[2747]: E1216 12:24:57.898420 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.898454 kubelet[2747]: W1216 12:24:57.898448 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.898611 kubelet[2747]: E1216 12:24:57.898471 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.899841 kubelet[2747]: E1216 12:24:57.899468 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.899841 kubelet[2747]: W1216 12:24:57.899494 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.899841 kubelet[2747]: E1216 12:24:57.899512 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.900286 kubelet[2747]: E1216 12:24:57.900252 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.900286 kubelet[2747]: W1216 12:24:57.900274 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.900355 kubelet[2747]: E1216 12:24:57.900289 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:24:57.904937 kubelet[2747]: E1216 12:24:57.904762 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:57.905696 containerd[1581]: time="2025-12-16T12:24:57.905636486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6wxbl,Uid:47e9c68e-e1bc-459a-aa64-9ab28c23b00f,Namespace:calico-system,Attempt:0,}" Dec 16 12:24:57.912932 kubelet[2747]: E1216 12:24:57.912875 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:24:57.912932 kubelet[2747]: W1216 12:24:57.912904 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:24:57.913282 kubelet[2747]: E1216 12:24:57.912952 2747 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:24:57.942977 containerd[1581]: time="2025-12-16T12:24:57.942265867Z" level=info msg="connecting to shim 50f8cbfae355eee01eeb4d0c1e56f0fd1f68d57fd2af9d644fccf78b4913e2db" address="unix:///run/containerd/s/dd87643022ed12a5501684a525666e0ca7c44c6714f9bb53f8e866f8a6a3f3a3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:24:57.964269 containerd[1581]: time="2025-12-16T12:24:57.964205023Z" level=info msg="connecting to shim 7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9" address="unix:///run/containerd/s/a7659594420ab78619e3c67413772330e3aa9b77741d45c6df93d0d492fcdd2b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:24:57.980242 systemd[1]: Started cri-containerd-50f8cbfae355eee01eeb4d0c1e56f0fd1f68d57fd2af9d644fccf78b4913e2db.scope - libcontainer container 50f8cbfae355eee01eeb4d0c1e56f0fd1f68d57fd2af9d644fccf78b4913e2db. Dec 16 12:24:57.996000 audit: BPF prog-id=151 op=LOAD Dec 16 12:24:57.996000 audit: BPF prog-id=152 op=LOAD Dec 16 12:24:57.996000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3238 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:57.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530663863626661653335356565653031656562346430633165353666 Dec 16 12:24:57.996000 audit: BPF prog-id=152 op=UNLOAD Dec 16 12:24:57.996000 audit[3249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:24:57.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530663863626661653335356565653031656562346430633165353666 Dec 16 12:24:57.998000 audit: BPF prog-id=153 op=LOAD Dec 16 12:24:57.998000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3238 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:57.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530663863626661653335356565653031656562346430633165353666 Dec 16 12:24:58.000000 audit: BPF prog-id=154 op=LOAD Dec 16 12:24:58.000000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3238 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530663863626661653335356565653031656562346430633165353666 Dec 16 12:24:58.000000 audit: BPF prog-id=154 op=UNLOAD Dec 16 12:24:58.000000 audit[3249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530663863626661653335356565653031656562346430633165353666 Dec 16 12:24:58.000000 audit: BPF prog-id=153 op=UNLOAD Dec 16 12:24:58.000000 audit[3249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530663863626661653335356565653031656562346430633165353666 Dec 16 12:24:58.000000 audit: BPF prog-id=155 op=LOAD Dec 16 12:24:58.000000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3238 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530663863626661653335356565653031656562346430633165353666 Dec 16 12:24:58.014206 systemd[1]: Started cri-containerd-7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9.scope - libcontainer container 7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9. 
Dec 16 12:24:58.030000 audit: BPF prog-id=156 op=LOAD Dec 16 12:24:58.032000 audit: BPF prog-id=157 op=LOAD Dec 16 12:24:58.032000 audit[3280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3263 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653464663230366339656636373564393534396636386434306233 Dec 16 12:24:58.032000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:24:58.032000 audit[3280]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653464663230366339656636373564393534396636386434306233 Dec 16 12:24:58.033000 audit: BPF prog-id=158 op=LOAD Dec 16 12:24:58.033000 audit[3280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3263 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.033000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653464663230366339656636373564393534396636386434306233 Dec 16 12:24:58.033000 audit: BPF prog-id=159 op=LOAD Dec 16 12:24:58.033000 audit[3280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3263 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653464663230366339656636373564393534396636386434306233 Dec 16 12:24:58.033000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:24:58.033000 audit[3280]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653464663230366339656636373564393534396636386434306233 Dec 16 12:24:58.033000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:24:58.033000 audit[3280]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:24:58.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653464663230366339656636373564393534396636386434306233 Dec 16 12:24:58.033000 audit: BPF prog-id=160 op=LOAD Dec 16 12:24:58.033000 audit[3280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3263 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653464663230366339656636373564393534396636386434306233 Dec 16 12:24:58.049552 containerd[1581]: time="2025-12-16T12:24:58.049493148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-996b577cb-wprv2,Uid:ffb6a89c-615a-459c-84d9-148076f0c50f,Namespace:calico-system,Attempt:0,} returns sandbox id \"50f8cbfae355eee01eeb4d0c1e56f0fd1f68d57fd2af9d644fccf78b4913e2db\"" Dec 16 12:24:58.057220 containerd[1581]: time="2025-12-16T12:24:58.057173265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6wxbl,Uid:47e9c68e-e1bc-459a-aa64-9ab28c23b00f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9\"" Dec 16 12:24:58.058953 kubelet[2747]: E1216 12:24:58.058664 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:58.059145 kubelet[2747]: E1216 12:24:58.059090 2747 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:24:58.062406 containerd[1581]: time="2025-12-16T12:24:58.062369120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:24:58.516000 audit[3320]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:58.516000 audit[3320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd82c0020 a2=0 a3=1 items=0 ppid=2883 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:58.523000 audit[3320]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:24:58.523000 audit[3320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd82c0020 a2=0 a3=1 items=0 ppid=2883 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:58.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:24:59.025537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512847314.mount: Deactivated successfully. 
Dec 16 12:24:59.098093 containerd[1581]: time="2025-12-16T12:24:59.098032226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:59.100893 containerd[1581]: time="2025-12-16T12:24:59.100822691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 12:24:59.102608 containerd[1581]: time="2025-12-16T12:24:59.102539411Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:59.105424 containerd[1581]: time="2025-12-16T12:24:59.105367083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:24:59.106237 containerd[1581]: time="2025-12-16T12:24:59.106202218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.043788289s" Dec 16 12:24:59.106340 containerd[1581]: time="2025-12-16T12:24:59.106240546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:24:59.108016 containerd[1581]: time="2025-12-16T12:24:59.107945904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:24:59.122376 containerd[1581]: time="2025-12-16T12:24:59.121804608Z" level=info msg="CreateContainer within sandbox 
\"7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:24:59.147954 containerd[1581]: time="2025-12-16T12:24:59.146802886Z" level=info msg="Container 448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:24:59.160517 containerd[1581]: time="2025-12-16T12:24:59.160456827Z" level=info msg="CreateContainer within sandbox \"7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b\"" Dec 16 12:24:59.161501 containerd[1581]: time="2025-12-16T12:24:59.161450235Z" level=info msg="StartContainer for \"448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b\"" Dec 16 12:24:59.163441 containerd[1581]: time="2025-12-16T12:24:59.163385921Z" level=info msg="connecting to shim 448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b" address="unix:///run/containerd/s/a7659594420ab78619e3c67413772330e3aa9b77741d45c6df93d0d492fcdd2b" protocol=ttrpc version=3 Dec 16 12:24:59.197229 systemd[1]: Started cri-containerd-448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b.scope - libcontainer container 448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b. 
Dec 16 12:24:59.256000 audit: BPF prog-id=161 op=LOAD Dec 16 12:24:59.256000 audit[3329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3263 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:59.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434383836336536613335643366663034366635373635373833336439 Dec 16 12:24:59.256000 audit: BPF prog-id=162 op=LOAD Dec 16 12:24:59.256000 audit[3329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3263 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:59.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434383836336536613335643366663034366635373635373833336439 Dec 16 12:24:59.256000 audit: BPF prog-id=162 op=UNLOAD Dec 16 12:24:59.256000 audit[3329]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:59.256000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434383836336536613335643366663034366635373635373833336439 Dec 16 12:24:59.256000 audit: BPF prog-id=161 op=UNLOAD Dec 16 12:24:59.256000 audit[3329]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:59.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434383836336536613335643366663034366635373635373833336439 Dec 16 12:24:59.256000 audit: BPF prog-id=163 op=LOAD Dec 16 12:24:59.256000 audit[3329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3263 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:24:59.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434383836336536613335643366663034366635373635373833336439 Dec 16 12:24:59.293140 systemd[1]: cri-containerd-448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b.scope: Deactivated successfully. 
Dec 16 12:24:59.294777 containerd[1581]: time="2025-12-16T12:24:59.294743485Z" level=info msg="StartContainer for \"448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b\" returns successfully" Dec 16 12:24:59.299000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:24:59.318382 containerd[1581]: time="2025-12-16T12:24:59.318313704Z" level=info msg="received container exit event container_id:\"448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b\" id:\"448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b\" pid:3341 exited_at:{seconds:1765887899 nanos:312812912}" Dec 16 12:24:59.678530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-448863e6a35d3ff046f57657833d93036a90f483e701684eb8eb06a8dd7cac1b-rootfs.mount: Deactivated successfully. Dec 16 12:24:59.728840 kubelet[2747]: E1216 12:24:59.728774 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b" Dec 16 12:24:59.810587 kubelet[2747]: E1216 12:24:59.810552 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:00.577205 containerd[1581]: time="2025-12-16T12:25:00.577102362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:00.578565 containerd[1581]: time="2025-12-16T12:25:00.578348413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:00.579582 containerd[1581]: time="2025-12-16T12:25:00.579544934Z" level=info msg="ImageCreate event 
name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:00.581811 containerd[1581]: time="2025-12-16T12:25:00.581749578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:00.582697 containerd[1581]: time="2025-12-16T12:25:00.582566702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.474576989s" Dec 16 12:25:00.582697 containerd[1581]: time="2025-12-16T12:25:00.582600909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:25:00.583671 containerd[1581]: time="2025-12-16T12:25:00.583639518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:25:00.608920 containerd[1581]: time="2025-12-16T12:25:00.608853513Z" level=info msg="CreateContainer within sandbox \"50f8cbfae355eee01eeb4d0c1e56f0fd1f68d57fd2af9d644fccf78b4913e2db\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:25:00.620062 containerd[1581]: time="2025-12-16T12:25:00.619095335Z" level=info msg="Container 43a28d15af7ebb3b6788aa3d624e49386555a9b3b3d828190c107d2cf97a3d5c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:00.630364 containerd[1581]: time="2025-12-16T12:25:00.630309873Z" level=info msg="CreateContainer within sandbox \"50f8cbfae355eee01eeb4d0c1e56f0fd1f68d57fd2af9d644fccf78b4913e2db\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container 
id \"43a28d15af7ebb3b6788aa3d624e49386555a9b3b3d828190c107d2cf97a3d5c\"" Dec 16 12:25:00.630912 containerd[1581]: time="2025-12-16T12:25:00.630882708Z" level=info msg="StartContainer for \"43a28d15af7ebb3b6788aa3d624e49386555a9b3b3d828190c107d2cf97a3d5c\"" Dec 16 12:25:00.634105 containerd[1581]: time="2025-12-16T12:25:00.634052266Z" level=info msg="connecting to shim 43a28d15af7ebb3b6788aa3d624e49386555a9b3b3d828190c107d2cf97a3d5c" address="unix:///run/containerd/s/dd87643022ed12a5501684a525666e0ca7c44c6714f9bb53f8e866f8a6a3f3a3" protocol=ttrpc version=3 Dec 16 12:25:00.658221 systemd[1]: Started cri-containerd-43a28d15af7ebb3b6788aa3d624e49386555a9b3b3d828190c107d2cf97a3d5c.scope - libcontainer container 43a28d15af7ebb3b6788aa3d624e49386555a9b3b3d828190c107d2cf97a3d5c. Dec 16 12:25:00.671000 audit: BPF prog-id=164 op=LOAD Dec 16 12:25:00.673367 kernel: kauditd_printk_skb: 74 callbacks suppressed Dec 16 12:25:00.673430 kernel: audit: type=1334 audit(1765887900.671:557): prog-id=164 op=LOAD Dec 16 12:25:00.673000 audit: BPF prog-id=165 op=LOAD Dec 16 12:25:00.675395 kernel: audit: type=1334 audit(1765887900.673:558): prog-id=165 op=LOAD Dec 16 12:25:00.675449 kernel: audit: type=1300 audit(1765887900.673:558): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.673000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.681009 kernel: audit: type=1327 audit(1765887900.673:558): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.673000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:25:00.682357 kernel: audit: type=1334 audit(1765887900.673:559): prog-id=165 op=UNLOAD Dec 16 12:25:00.682411 kernel: audit: type=1300 audit(1765887900.673:559): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.673000 audit[3386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.685318 kernel: audit: type=1327 audit(1765887900.673:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.673000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.689236 kernel: audit: type=1334 audit(1765887900.673:560): prog-id=166 op=LOAD Dec 16 12:25:00.673000 audit: BPF prog-id=166 op=LOAD Dec 16 12:25:00.673000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.693465 kernel: audit: type=1300 audit(1765887900.673:560): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.698080 kernel: audit: type=1327 audit(1765887900.673:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.680000 audit: BPF prog-id=167 op=LOAD Dec 16 12:25:00.680000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.680000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:25:00.680000 audit[3386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.680000 audit: BPF prog-id=166 op=UNLOAD Dec 16 12:25:00.680000 audit[3386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.680000 audit: BPF prog-id=168 op=LOAD Dec 16 12:25:00.680000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3238 pid=3386 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:00.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433613238643135616637656262336236373838616133643632346534 Dec 16 12:25:00.738508 containerd[1581]: time="2025-12-16T12:25:00.738316574Z" level=info msg="StartContainer for \"43a28d15af7ebb3b6788aa3d624e49386555a9b3b3d828190c107d2cf97a3d5c\" returns successfully" Dec 16 12:25:00.818219 kubelet[2747]: E1216 12:25:00.818100 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:01.729035 kubelet[2747]: E1216 12:25:01.728707 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b" Dec 16 12:25:01.820181 kubelet[2747]: I1216 12:25:01.820103 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:25:01.821175 kubelet[2747]: E1216 12:25:01.821102 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:02.996135 containerd[1581]: time="2025-12-16T12:25:02.996057180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:02.997263 containerd[1581]: time="2025-12-16T12:25:02.997191912Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:25:02.998951 containerd[1581]: time="2025-12-16T12:25:02.998860062Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:03.001940 containerd[1581]: time="2025-12-16T12:25:03.001264708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:03.002117 containerd[1581]: time="2025-12-16T12:25:03.002091216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.418410371s" Dec 16 12:25:03.002190 containerd[1581]: time="2025-12-16T12:25:03.002175672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:25:03.008515 containerd[1581]: time="2025-12-16T12:25:03.008474762Z" level=info msg="CreateContainer within sandbox \"7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:25:03.018285 containerd[1581]: time="2025-12-16T12:25:03.018209350Z" level=info msg="Container 233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:03.034122 containerd[1581]: time="2025-12-16T12:25:03.034052834Z" level=info msg="CreateContainer within sandbox \"7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98\"" Dec 16 12:25:03.034870 containerd[1581]: time="2025-12-16T12:25:03.034789727Z" level=info msg="StartContainer for \"233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98\"" Dec 16 12:25:03.036627 containerd[1581]: time="2025-12-16T12:25:03.036597131Z" level=info msg="connecting to shim 233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98" address="unix:///run/containerd/s/a7659594420ab78619e3c67413772330e3aa9b77741d45c6df93d0d492fcdd2b" protocol=ttrpc version=3 Dec 16 12:25:03.063160 systemd[1]: Started cri-containerd-233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98.scope - libcontainer container 233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98. Dec 16 12:25:03.117000 audit: BPF prog-id=169 op=LOAD Dec 16 12:25:03.117000 audit[3432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3263 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:03.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233336266313733306239363462653161316431336164303737646235 Dec 16 12:25:03.118000 audit: BPF prog-id=170 op=LOAD Dec 16 12:25:03.118000 audit[3432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3263 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:03.118000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233336266313733306239363462653161316431336164303737646235 Dec 16 12:25:03.118000 audit: BPF prog-id=170 op=UNLOAD Dec 16 12:25:03.118000 audit[3432]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:03.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233336266313733306239363462653161316431336164303737646235 Dec 16 12:25:03.118000 audit: BPF prog-id=169 op=UNLOAD Dec 16 12:25:03.118000 audit[3432]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:03.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233336266313733306239363462653161316431336164303737646235 Dec 16 12:25:03.118000 audit: BPF prog-id=171 op=LOAD Dec 16 12:25:03.118000 audit[3432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3263 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:25:03.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233336266313733306239363462653161316431336164303737646235
Dec 16 12:25:03.153457 containerd[1581]: time="2025-12-16T12:25:03.153324767Z" level=info msg="StartContainer for \"233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98\" returns successfully"
Dec 16 12:25:03.728945 kubelet[2747]: E1216 12:25:03.728731 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b"
Dec 16 12:25:03.803997 systemd[1]: cri-containerd-233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98.scope: Deactivated successfully.
Dec 16 12:25:03.805551 containerd[1581]: time="2025-12-16T12:25:03.805510294Z" level=info msg="received container exit event container_id:\"233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98\" id:\"233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98\" pid:3445 exited_at:{seconds:1765887903 nanos:805055452}"
Dec 16 12:25:03.806121 systemd[1]: cri-containerd-233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98.scope: Consumed 488ms CPU time, 172.6M memory peak, 2.5M read from disk, 165.9M written to disk.
Dec 16 12:25:03.810000 audit: BPF prog-id=171 op=UNLOAD
Dec 16 12:25:03.831487 kubelet[2747]: E1216 12:25:03.831449 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:25:03.832461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-233bf1730b964be1a1d13ad077db51e3640d3daa9219fd5797cb2a752bb79f98-rootfs.mount: Deactivated successfully.
Dec 16 12:25:03.873403 kubelet[2747]: I1216 12:25:03.873229 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-996b577cb-wprv2" podStartSLOduration=4.351333034 podStartE2EDuration="6.873212448s" podCreationTimestamp="2025-12-16 12:24:57 +0000 UTC" firstStartedPulling="2025-12-16 12:24:58.061618796 +0000 UTC m=+25.444351059" lastFinishedPulling="2025-12-16 12:25:00.58349821 +0000 UTC m=+27.966230473" observedRunningTime="2025-12-16 12:25:00.836590076 +0000 UTC m=+28.219322339" watchObservedRunningTime="2025-12-16 12:25:03.873212448 +0000 UTC m=+31.255944711"
Dec 16 12:25:03.890216 kubelet[2747]: I1216 12:25:03.890171 2747 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 12:25:04.027140 systemd[1]: Created slice kubepods-burstable-pod8b94d2bc_c5a4_45dc_8d85_f25fcbe8bb64.slice - libcontainer container kubepods-burstable-pod8b94d2bc_c5a4_45dc_8d85_f25fcbe8bb64.slice.
Dec 16 12:25:04.052605 systemd[1]: Created slice kubepods-besteffort-pod7dc3261b_d36f_4639_8c3f_f9eff73dc960.slice - libcontainer container kubepods-besteffort-pod7dc3261b_d36f_4639_8c3f_f9eff73dc960.slice.
Dec 16 12:25:04.057327 systemd[1]: Created slice kubepods-besteffort-pod1414ae41_c2cb_4936_90b7_c8854a1bb586.slice - libcontainer container kubepods-besteffort-pod1414ae41_c2cb_4936_90b7_c8854a1bb586.slice.
Dec 16 12:25:04.063960 systemd[1]: Created slice kubepods-besteffort-podcbbe966c_0a8c_4af7_b5f1_2e4d5d293544.slice - libcontainer container kubepods-besteffort-podcbbe966c_0a8c_4af7_b5f1_2e4d5d293544.slice.
Dec 16 12:25:04.072535 systemd[1]: Created slice kubepods-burstable-pod3b16c732_4418_4cfc_b3b1_1b82c89afd86.slice - libcontainer container kubepods-burstable-pod3b16c732_4418_4cfc_b3b1_1b82c89afd86.slice.
Dec 16 12:25:04.076321 systemd[1]: Created slice kubepods-besteffort-podc186cd40_d4dc_48c3_8fe5_5af674baa410.slice - libcontainer container kubepods-besteffort-podc186cd40_d4dc_48c3_8fe5_5af674baa410.slice.
Dec 16 12:25:04.083845 systemd[1]: Created slice kubepods-besteffort-pod9f50ced9_6722_4b8a_92ec_e6e3732665dc.slice - libcontainer container kubepods-besteffort-pod9f50ced9_6722_4b8a_92ec_e6e3732665dc.slice.
Dec 16 12:25:04.131737 kubelet[2747]: I1216 12:25:04.131664 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64-config-volume\") pod \"coredns-674b8bbfcf-p6dhb\" (UID: \"8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64\") " pod="kube-system/coredns-674b8bbfcf-p6dhb"
Dec 16 12:25:04.131737 kubelet[2747]: I1216 12:25:04.131716 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1414ae41-c2cb-4936-90b7-c8854a1bb586-calico-apiserver-certs\") pod \"calico-apiserver-67754b54bf-t5zll\" (UID: \"1414ae41-c2cb-4936-90b7-c8854a1bb586\") " pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll"
Dec 16 12:25:04.131737 kubelet[2747]: I1216 12:25:04.131742 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dc3261b-d36f-4639-8c3f-f9eff73dc960-tigera-ca-bundle\") pod \"calico-kube-controllers-7dccf794c6-mwtbf\" (UID: \"7dc3261b-d36f-4639-8c3f-f9eff73dc960\") " pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf"
Dec 16 12:25:04.132148 kubelet[2747]: I1216 12:25:04.131874 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c186cd40-d4dc-48c3-8fe5-5af674baa410-goldmane-key-pair\") pod \"goldmane-666569f655-n9dd2\" (UID: \"c186cd40-d4dc-48c3-8fe5-5af674baa410\") " pod="calico-system/goldmane-666569f655-n9dd2"
Dec 16 12:25:04.132148 kubelet[2747]: I1216 12:25:04.132015 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9f50ced9-6722-4b8a-92ec-e6e3732665dc-calico-apiserver-certs\") pod \"calico-apiserver-67754b54bf-dz9mw\" (UID: \"9f50ced9-6722-4b8a-92ec-e6e3732665dc\") " pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw"
Dec 16 12:25:04.132148 kubelet[2747]: I1216 12:25:04.132043 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-whisker-ca-bundle\") pod \"whisker-859b8d64d8-9dh4b\" (UID: \"cbbe966c-0a8c-4af7-b5f1-2e4d5d293544\") " pod="calico-system/whisker-859b8d64d8-9dh4b"
Dec 16 12:25:04.132148 kubelet[2747]: I1216 12:25:04.132059 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhdxb\" (UniqueName: \"kubernetes.io/projected/9f50ced9-6722-4b8a-92ec-e6e3732665dc-kube-api-access-rhdxb\") pod \"calico-apiserver-67754b54bf-dz9mw\" (UID: \"9f50ced9-6722-4b8a-92ec-e6e3732665dc\") " pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw"
Dec 16 12:25:04.132148 kubelet[2747]: I1216 12:25:04.132083 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9zx\" (UniqueName: \"kubernetes.io/projected/3b16c732-4418-4cfc-b3b1-1b82c89afd86-kube-api-access-wp9zx\") pod \"coredns-674b8bbfcf-25l4t\" (UID: \"3b16c732-4418-4cfc-b3b1-1b82c89afd86\") " pod="kube-system/coredns-674b8bbfcf-25l4t"
Dec 16 12:25:04.132266 kubelet[2747]: I1216 12:25:04.132105 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2kj\" (UniqueName: \"kubernetes.io/projected/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-kube-api-access-rn2kj\") pod \"whisker-859b8d64d8-9dh4b\" (UID: \"cbbe966c-0a8c-4af7-b5f1-2e4d5d293544\") " pod="calico-system/whisker-859b8d64d8-9dh4b"
Dec 16 12:25:04.132266 kubelet[2747]: I1216 12:25:04.132122 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njlhx\" (UniqueName: \"kubernetes.io/projected/7dc3261b-d36f-4639-8c3f-f9eff73dc960-kube-api-access-njlhx\") pod \"calico-kube-controllers-7dccf794c6-mwtbf\" (UID: \"7dc3261b-d36f-4639-8c3f-f9eff73dc960\") " pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf"
Dec 16 12:25:04.132266 kubelet[2747]: I1216 12:25:04.132145 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvdx\" (UniqueName: \"kubernetes.io/projected/8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64-kube-api-access-9vvdx\") pod \"coredns-674b8bbfcf-p6dhb\" (UID: \"8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64\") " pod="kube-system/coredns-674b8bbfcf-p6dhb"
Dec 16 12:25:04.132266 kubelet[2747]: I1216 12:25:04.132162 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvmxv\" (UniqueName: \"kubernetes.io/projected/c186cd40-d4dc-48c3-8fe5-5af674baa410-kube-api-access-qvmxv\") pod \"goldmane-666569f655-n9dd2\" (UID: \"c186cd40-d4dc-48c3-8fe5-5af674baa410\") " pod="calico-system/goldmane-666569f655-n9dd2"
Dec 16 12:25:04.132266 kubelet[2747]: I1216 12:25:04.132181 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b16c732-4418-4cfc-b3b1-1b82c89afd86-config-volume\") pod \"coredns-674b8bbfcf-25l4t\" (UID: \"3b16c732-4418-4cfc-b3b1-1b82c89afd86\") " pod="kube-system/coredns-674b8bbfcf-25l4t"
Dec 16 12:25:04.132379 kubelet[2747]: I1216 12:25:04.132197 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvjdb\" (UniqueName: \"kubernetes.io/projected/1414ae41-c2cb-4936-90b7-c8854a1bb586-kube-api-access-mvjdb\") pod \"calico-apiserver-67754b54bf-t5zll\" (UID: \"1414ae41-c2cb-4936-90b7-c8854a1bb586\") " pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll"
Dec 16 12:25:04.132379 kubelet[2747]: I1216 12:25:04.132217 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c186cd40-d4dc-48c3-8fe5-5af674baa410-config\") pod \"goldmane-666569f655-n9dd2\" (UID: \"c186cd40-d4dc-48c3-8fe5-5af674baa410\") " pod="calico-system/goldmane-666569f655-n9dd2"
Dec 16 12:25:04.132379 kubelet[2747]: I1216 12:25:04.132232 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c186cd40-d4dc-48c3-8fe5-5af674baa410-goldmane-ca-bundle\") pod \"goldmane-666569f655-n9dd2\" (UID: \"c186cd40-d4dc-48c3-8fe5-5af674baa410\") " pod="calico-system/goldmane-666569f655-n9dd2"
Dec 16 12:25:04.132379 kubelet[2747]: I1216 12:25:04.132291 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-whisker-backend-key-pair\") pod \"whisker-859b8d64d8-9dh4b\" (UID: \"cbbe966c-0a8c-4af7-b5f1-2e4d5d293544\") " pod="calico-system/whisker-859b8d64d8-9dh4b"
Dec 16 12:25:04.341973 kubelet[2747]: E1216 12:25:04.341825 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:25:04.344164 containerd[1581]: time="2025-12-16T12:25:04.344112203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p6dhb,Uid:8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64,Namespace:kube-system,Attempt:0,}"
Dec 16 12:25:04.357001 containerd[1581]: time="2025-12-16T12:25:04.356902018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dccf794c6-mwtbf,Uid:7dc3261b-d36f-4639-8c3f-f9eff73dc960,Namespace:calico-system,Attempt:0,}"
Dec 16 12:25:04.360451 containerd[1581]: time="2025-12-16T12:25:04.360270521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67754b54bf-t5zll,Uid:1414ae41-c2cb-4936-90b7-c8854a1bb586,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 12:25:04.368188 containerd[1581]: time="2025-12-16T12:25:04.368142524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-859b8d64d8-9dh4b,Uid:cbbe966c-0a8c-4af7-b5f1-2e4d5d293544,Namespace:calico-system,Attempt:0,}"
Dec 16 12:25:04.375628 kubelet[2747]: E1216 12:25:04.375580 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:25:04.376389 containerd[1581]: time="2025-12-16T12:25:04.376329822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-25l4t,Uid:3b16c732-4418-4cfc-b3b1-1b82c89afd86,Namespace:kube-system,Attempt:0,}"
Dec 16 12:25:04.383185 containerd[1581]: time="2025-12-16T12:25:04.383145322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n9dd2,Uid:c186cd40-d4dc-48c3-8fe5-5af674baa410,Namespace:calico-system,Attempt:0,}"
Dec 16 12:25:04.387892 containerd[1581]: time="2025-12-16T12:25:04.387604574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67754b54bf-dz9mw,Uid:9f50ced9-6722-4b8a-92ec-e6e3732665dc,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 12:25:04.499821 containerd[1581]: time="2025-12-16T12:25:04.499758195Z" level=error msg="Failed to destroy network for sandbox \"9590cebe328f2320d78934f3af7a80eacf46dd628df6d5d74daa107cd080c5b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.503132 containerd[1581]: time="2025-12-16T12:25:04.503071129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67754b54bf-t5zll,Uid:1414ae41-c2cb-4936-90b7-c8854a1bb586,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9590cebe328f2320d78934f3af7a80eacf46dd628df6d5d74daa107cd080c5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.508965 kubelet[2747]: E1216 12:25:04.508882 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9590cebe328f2320d78934f3af7a80eacf46dd628df6d5d74daa107cd080c5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.509133 kubelet[2747]: E1216 12:25:04.509002 2747 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9590cebe328f2320d78934f3af7a80eacf46dd628df6d5d74daa107cd080c5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll"
Dec 16 12:25:04.509133 kubelet[2747]: E1216 12:25:04.509033 2747 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9590cebe328f2320d78934f3af7a80eacf46dd628df6d5d74daa107cd080c5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll"
Dec 16 12:25:04.509133 kubelet[2747]: E1216 12:25:04.509114 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67754b54bf-t5zll_calico-apiserver(1414ae41-c2cb-4936-90b7-c8854a1bb586)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67754b54bf-t5zll_calico-apiserver(1414ae41-c2cb-4936-90b7-c8854a1bb586)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9590cebe328f2320d78934f3af7a80eacf46dd628df6d5d74daa107cd080c5b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll" podUID="1414ae41-c2cb-4936-90b7-c8854a1bb586"
Dec 16 12:25:04.520784 containerd[1581]: time="2025-12-16T12:25:04.520735227Z" level=error msg="Failed to destroy network for sandbox \"e347631fb1ad8e4c47f231fe727a1406692af32cc698cfe32502a339b8e497b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.522726 containerd[1581]: time="2025-12-16T12:25:04.522678564Z" level=error msg="Failed to destroy network for sandbox \"bc06a4706a58bb4c5ccb163a4c88ff5ba2584776585a8337cd89c2c88f64e337\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.522977 containerd[1581]: time="2025-12-16T12:25:04.522813427Z" level=error msg="Failed to destroy network for sandbox \"b1db7f3c152e46f7829f2642d5644f899a24ba38cce139b36c6b722825e75856\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.523419 containerd[1581]: time="2025-12-16T12:25:04.523359722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-859b8d64d8-9dh4b,Uid:cbbe966c-0a8c-4af7-b5f1-2e4d5d293544,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e347631fb1ad8e4c47f231fe727a1406692af32cc698cfe32502a339b8e497b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.523656 kubelet[2747]: E1216 12:25:04.523607 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e347631fb1ad8e4c47f231fe727a1406692af32cc698cfe32502a339b8e497b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.523725 kubelet[2747]: E1216 12:25:04.523675 2747 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e347631fb1ad8e4c47f231fe727a1406692af32cc698cfe32502a339b8e497b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-859b8d64d8-9dh4b"
Dec 16 12:25:04.523725 kubelet[2747]: E1216 12:25:04.523696 2747 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e347631fb1ad8e4c47f231fe727a1406692af32cc698cfe32502a339b8e497b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-859b8d64d8-9dh4b"
Dec 16 12:25:04.523826 kubelet[2747]: E1216 12:25:04.523757 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-859b8d64d8-9dh4b_calico-system(cbbe966c-0a8c-4af7-b5f1-2e4d5d293544)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-859b8d64d8-9dh4b_calico-system(cbbe966c-0a8c-4af7-b5f1-2e4d5d293544)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e347631fb1ad8e4c47f231fe727a1406692af32cc698cfe32502a339b8e497b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-859b8d64d8-9dh4b" podUID="cbbe966c-0a8c-4af7-b5f1-2e4d5d293544"
Dec 16 12:25:04.527353 containerd[1581]: time="2025-12-16T12:25:04.527206428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-25l4t,Uid:3b16c732-4418-4cfc-b3b1-1b82c89afd86,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1db7f3c152e46f7829f2642d5644f899a24ba38cce139b36c6b722825e75856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.527647 kubelet[2747]: E1216 12:25:04.527583 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1db7f3c152e46f7829f2642d5644f899a24ba38cce139b36c6b722825e75856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.527716 kubelet[2747]: E1216 12:25:04.527670 2747 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1db7f3c152e46f7829f2642d5644f899a24ba38cce139b36c6b722825e75856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-25l4t"
Dec 16 12:25:04.527716 kubelet[2747]: E1216 12:25:04.527703 2747 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1db7f3c152e46f7829f2642d5644f899a24ba38cce139b36c6b722825e75856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-25l4t"
Dec 16 12:25:04.527800 kubelet[2747]: E1216 12:25:04.527760 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-25l4t_kube-system(3b16c732-4418-4cfc-b3b1-1b82c89afd86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-25l4t_kube-system(3b16c732-4418-4cfc-b3b1-1b82c89afd86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1db7f3c152e46f7829f2642d5644f899a24ba38cce139b36c6b722825e75856\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-25l4t" podUID="3b16c732-4418-4cfc-b3b1-1b82c89afd86"
Dec 16 12:25:04.528835 containerd[1581]: time="2025-12-16T12:25:04.528708888Z" level=error msg="Failed to destroy network for sandbox \"e4016c398f9a8208eb701e3ad7dd252f63661496d9dd82a95a489c47f9ad5398\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.529880 containerd[1581]: time="2025-12-16T12:25:04.529820401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n9dd2,Uid:c186cd40-d4dc-48c3-8fe5-5af674baa410,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc06a4706a58bb4c5ccb163a4c88ff5ba2584776585a8337cd89c2c88f64e337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.530800 containerd[1581]: time="2025-12-16T12:25:04.530677869Z" level=error msg="Failed to destroy network for sandbox \"32fb28bbb54367bbd4124ceffeb47164da355977e6e0510ffd07ee7022b8112c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.530884 kubelet[2747]: E1216 12:25:04.530472 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc06a4706a58bb4c5ccb163a4c88ff5ba2584776585a8337cd89c2c88f64e337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.530884 kubelet[2747]: E1216 12:25:04.530538 2747 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc06a4706a58bb4c5ccb163a4c88ff5ba2584776585a8337cd89c2c88f64e337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-n9dd2"
Dec 16 12:25:04.530884 kubelet[2747]: E1216 12:25:04.530557 2747 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc06a4706a58bb4c5ccb163a4c88ff5ba2584776585a8337cd89c2c88f64e337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-n9dd2"
Dec 16 12:25:04.531966 kubelet[2747]: E1216 12:25:04.530599 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-n9dd2_calico-system(c186cd40-d4dc-48c3-8fe5-5af674baa410)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-n9dd2_calico-system(c186cd40-d4dc-48c3-8fe5-5af674baa410)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc06a4706a58bb4c5ccb163a4c88ff5ba2584776585a8337cd89c2c88f64e337\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-n9dd2" podUID="c186cd40-d4dc-48c3-8fe5-5af674baa410"
Dec 16 12:25:04.531966 kubelet[2747]: E1216 12:25:04.531936 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4016c398f9a8208eb701e3ad7dd252f63661496d9dd82a95a489c47f9ad5398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.532089 containerd[1581]: time="2025-12-16T12:25:04.531516214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p6dhb,Uid:8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4016c398f9a8208eb701e3ad7dd252f63661496d9dd82a95a489c47f9ad5398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.532133 kubelet[2747]: E1216 12:25:04.531981 2747 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4016c398f9a8208eb701e3ad7dd252f63661496d9dd82a95a489c47f9ad5398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p6dhb"
Dec 16 12:25:04.532133 kubelet[2747]: E1216 12:25:04.531999 2747 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4016c398f9a8208eb701e3ad7dd252f63661496d9dd82a95a489c47f9ad5398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p6dhb"
Dec 16 12:25:04.532133 kubelet[2747]: E1216 12:25:04.532037 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p6dhb_kube-system(8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p6dhb_kube-system(8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4016c398f9a8208eb701e3ad7dd252f63661496d9dd82a95a489c47f9ad5398\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p6dhb" podUID="8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64"
Dec 16 12:25:04.533675 containerd[1581]: time="2025-12-16T12:25:04.533620779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dccf794c6-mwtbf,Uid:7dc3261b-d36f-4639-8c3f-f9eff73dc960,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32fb28bbb54367bbd4124ceffeb47164da355977e6e0510ffd07ee7022b8112c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.533864 kubelet[2747]: E1216 12:25:04.533830 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32fb28bbb54367bbd4124ceffeb47164da355977e6e0510ffd07ee7022b8112c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.533921 kubelet[2747]: E1216 12:25:04.533881 2747 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32fb28bbb54367bbd4124ceffeb47164da355977e6e0510ffd07ee7022b8112c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf"
Dec 16 12:25:04.534048 kubelet[2747]: E1216 12:25:04.534021 2747 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32fb28bbb54367bbd4124ceffeb47164da355977e6e0510ffd07ee7022b8112c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf"
Dec 16 12:25:04.534134 kubelet[2747]: E1216 12:25:04.534104 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dccf794c6-mwtbf_calico-system(7dc3261b-d36f-4639-8c3f-f9eff73dc960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dccf794c6-mwtbf_calico-system(7dc3261b-d36f-4639-8c3f-f9eff73dc960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32fb28bbb54367bbd4124ceffeb47164da355977e6e0510ffd07ee7022b8112c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf" podUID="7dc3261b-d36f-4639-8c3f-f9eff73dc960"
Dec 16 12:25:04.540016 containerd[1581]: time="2025-12-16T12:25:04.539846537Z" level=error msg="Failed to destroy network for sandbox \"9fc99977c88eb20e0b7fb040d0871450482d993d604726e30b966e6185fea937\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.544342 containerd[1581]: time="2025-12-16T12:25:04.544072069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67754b54bf-dz9mw,Uid:9f50ced9-6722-4b8a-92ec-e6e3732665dc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fc99977c88eb20e0b7fb040d0871450482d993d604726e30b966e6185fea937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.544764 kubelet[2747]: E1216 12:25:04.544721 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fc99977c88eb20e0b7fb040d0871450482d993d604726e30b966e6185fea937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:04.544826 kubelet[2747]: E1216 12:25:04.544772 2747 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fc99977c88eb20e0b7fb040d0871450482d993d604726e30b966e6185fea937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw"
Dec 16 12:25:04.544826 kubelet[2747]: E1216 12:25:04.544790 2747 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fc99977c88eb20e0b7fb040d0871450482d993d604726e30b966e6185fea937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw"
Dec 16 12:25:04.544991 kubelet[2747]: E1216 12:25:04.544856 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67754b54bf-dz9mw_calico-apiserver(9f50ced9-6722-4b8a-92ec-e6e3732665dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67754b54bf-dz9mw_calico-apiserver(9f50ced9-6722-4b8a-92ec-e6e3732665dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fc99977c88eb20e0b7fb040d0871450482d993d604726e30b966e6185fea937\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw" podUID="9f50ced9-6722-4b8a-92ec-e6e3732665dc"
Dec 16 12:25:04.836245 kubelet[2747]: E1216 12:25:04.836212 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:25:04.837940 containerd[1581]: time="2025-12-16T12:25:04.837861741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Dec 16 12:25:05.737543 systemd[1]: Created slice kubepods-besteffort-pod179aa3f5_01af_4f0c_91ba_27b0e8267d2b.slice - libcontainer container kubepods-besteffort-pod179aa3f5_01af_4f0c_91ba_27b0e8267d2b.slice.
Dec 16 12:25:05.745737 containerd[1581]: time="2025-12-16T12:25:05.744677006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ndhz8,Uid:179aa3f5-01af-4f0c-91ba-27b0e8267d2b,Namespace:calico-system,Attempt:0,}"
Dec 16 12:25:05.830453 containerd[1581]: time="2025-12-16T12:25:05.830318004Z" level=error msg="Failed to destroy network for sandbox \"d8bf68eb9ecfba9a9102e72be9fd476886dc3631bf22c9c703acf99f4a10976e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:05.832820 systemd[1]: run-netns-cni\x2d02420c53\x2d3fe8\x2dec13\x2dedaa\x2d964a53a33c4d.mount: Deactivated successfully.
Dec 16 12:25:05.834592 containerd[1581]: time="2025-12-16T12:25:05.834515226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ndhz8,Uid:179aa3f5-01af-4f0c-91ba-27b0e8267d2b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8bf68eb9ecfba9a9102e72be9fd476886dc3631bf22c9c703acf99f4a10976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:05.836248 kubelet[2747]: E1216 12:25:05.835195 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8bf68eb9ecfba9a9102e72be9fd476886dc3631bf22c9c703acf99f4a10976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:25:05.836248 kubelet[2747]: E1216 12:25:05.835338 2747 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8bf68eb9ecfba9a9102e72be9fd476886dc3631bf22c9c703acf99f4a10976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ndhz8"
Dec 16 12:25:05.836248 kubelet[2747]: E1216 12:25:05.835364 2747 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8bf68eb9ecfba9a9102e72be9fd476886dc3631bf22c9c703acf99f4a10976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ndhz8"
Dec 16 12:25:05.836672 kubelet[2747]: E1216 12:25:05.835424 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ndhz8_calico-system(179aa3f5-01af-4f0c-91ba-27b0e8267d2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ndhz8_calico-system(179aa3f5-01af-4f0c-91ba-27b0e8267d2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8bf68eb9ecfba9a9102e72be9fd476886dc3631bf22c9c703acf99f4a10976e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b"
Dec 16 12:25:08.845412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051558573.mount: Deactivated successfully.
Dec 16 12:25:09.077402 containerd[1581]: time="2025-12-16T12:25:09.077272858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:09.080095 containerd[1581]: time="2025-12-16T12:25:09.080002099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:25:09.082523 containerd[1581]: time="2025-12-16T12:25:09.082436176Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:09.087812 containerd[1581]: time="2025-12-16T12:25:09.087727473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:09.088591 containerd[1581]: time="2025-12-16T12:25:09.088368087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.250460058s" Dec 16 12:25:09.088591 containerd[1581]: time="2025-12-16T12:25:09.088427016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:25:09.113624 containerd[1581]: time="2025-12-16T12:25:09.113474613Z" level=info msg="CreateContainer within sandbox \"7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:25:09.129729 containerd[1581]: time="2025-12-16T12:25:09.129674231Z" level=info msg="Container 
2e8f36958813dcb9348b9b25e3b239dfd80917c59c18ab796c6bc35112d7e510: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:09.155069 containerd[1581]: time="2025-12-16T12:25:09.155017392Z" level=info msg="CreateContainer within sandbox \"7fe4df206c9ef675d9549f68d40b3fd1fbc12b57392267ac50e9daf17135d3d9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2e8f36958813dcb9348b9b25e3b239dfd80917c59c18ab796c6bc35112d7e510\"" Dec 16 12:25:09.156118 containerd[1581]: time="2025-12-16T12:25:09.156028180Z" level=info msg="StartContainer for \"2e8f36958813dcb9348b9b25e3b239dfd80917c59c18ab796c6bc35112d7e510\"" Dec 16 12:25:09.160155 containerd[1581]: time="2025-12-16T12:25:09.160085736Z" level=info msg="connecting to shim 2e8f36958813dcb9348b9b25e3b239dfd80917c59c18ab796c6bc35112d7e510" address="unix:///run/containerd/s/a7659594420ab78619e3c67413772330e3aa9b77741d45c6df93d0d492fcdd2b" protocol=ttrpc version=3 Dec 16 12:25:09.215274 systemd[1]: Started cri-containerd-2e8f36958813dcb9348b9b25e3b239dfd80917c59c18ab796c6bc35112d7e510.scope - libcontainer container 2e8f36958813dcb9348b9b25e3b239dfd80917c59c18ab796c6bc35112d7e510. 
Dec 16 12:25:09.286940 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 12:25:09.287071 kernel: audit: type=1334 audit(1765887909.282:571): prog-id=172 op=LOAD Dec 16 12:25:09.287097 kernel: audit: type=1300 audit(1765887909.282:571): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3263 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:09.282000 audit: BPF prog-id=172 op=LOAD Dec 16 12:25:09.282000 audit[3747]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3263 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:09.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265386633363935383831336463623933343862396232356533623233 Dec 16 12:25:09.293320 kernel: audit: type=1327 audit(1765887909.282:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265386633363935383831336463623933343862396232356533623233 Dec 16 12:25:09.293417 kernel: audit: type=1334 audit(1765887909.283:572): prog-id=173 op=LOAD Dec 16 12:25:09.283000 audit: BPF prog-id=173 op=LOAD Dec 16 12:25:09.283000 audit[3747]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3263 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:09.297491 kernel: audit: type=1300 audit(1765887909.283:572): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3263 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:09.297668 kernel: audit: type=1327 audit(1765887909.283:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265386633363935383831336463623933343862396232356533623233 Dec 16 12:25:09.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265386633363935383831336463623933343862396232356533623233 Dec 16 12:25:09.284000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:25:09.301745 kernel: audit: type=1334 audit(1765887909.284:573): prog-id=173 op=UNLOAD Dec 16 12:25:09.284000 audit[3747]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:09.305651 kernel: audit: type=1300 audit(1765887909.284:573): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:09.284000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265386633363935383831336463623933343862396232356533623233 Dec 16 12:25:09.309334 kernel: audit: type=1327 audit(1765887909.284:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265386633363935383831336463623933343862396232356533623233 Dec 16 12:25:09.284000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:25:09.310508 kernel: audit: type=1334 audit(1765887909.284:574): prog-id=172 op=UNLOAD Dec 16 12:25:09.284000 audit[3747]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:09.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265386633363935383831336463623933343862396232356533623233 Dec 16 12:25:09.284000 audit: BPF prog-id=174 op=LOAD Dec 16 12:25:09.284000 audit[3747]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3263 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:09.284000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265386633363935383831336463623933343862396232356533623233 Dec 16 12:25:09.328677 containerd[1581]: time="2025-12-16T12:25:09.328636402Z" level=info msg="StartContainer for \"2e8f36958813dcb9348b9b25e3b239dfd80917c59c18ab796c6bc35112d7e510\" returns successfully" Dec 16 12:25:09.493603 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:25:09.493817 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 12:25:09.769215 kubelet[2747]: I1216 12:25:09.769162 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-whisker-ca-bundle\") pod \"cbbe966c-0a8c-4af7-b5f1-2e4d5d293544\" (UID: \"cbbe966c-0a8c-4af7-b5f1-2e4d5d293544\") " Dec 16 12:25:09.770572 kubelet[2747]: I1216 12:25:09.770116 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-whisker-backend-key-pair\") pod \"cbbe966c-0a8c-4af7-b5f1-2e4d5d293544\" (UID: \"cbbe966c-0a8c-4af7-b5f1-2e4d5d293544\") " Dec 16 12:25:09.770972 kubelet[2747]: I1216 12:25:09.770950 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn2kj\" (UniqueName: \"kubernetes.io/projected/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-kube-api-access-rn2kj\") pod \"cbbe966c-0a8c-4af7-b5f1-2e4d5d293544\" (UID: \"cbbe966c-0a8c-4af7-b5f1-2e4d5d293544\") " Dec 16 12:25:09.792006 kubelet[2747]: I1216 12:25:09.791887 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-kube-api-access-rn2kj" 
(OuterVolumeSpecName: "kube-api-access-rn2kj") pod "cbbe966c-0a8c-4af7-b5f1-2e4d5d293544" (UID: "cbbe966c-0a8c-4af7-b5f1-2e4d5d293544"). InnerVolumeSpecName "kube-api-access-rn2kj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:25:09.799537 kubelet[2747]: I1216 12:25:09.799469 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cbbe966c-0a8c-4af7-b5f1-2e4d5d293544" (UID: "cbbe966c-0a8c-4af7-b5f1-2e4d5d293544"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:25:09.802623 kubelet[2747]: I1216 12:25:09.801634 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cbbe966c-0a8c-4af7-b5f1-2e4d5d293544" (UID: "cbbe966c-0a8c-4af7-b5f1-2e4d5d293544"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:25:09.846254 systemd[1]: var-lib-kubelet-pods-cbbe966c\x2d0a8c\x2d4af7\x2db5f1\x2d2e4d5d293544-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drn2kj.mount: Deactivated successfully. Dec 16 12:25:09.846376 systemd[1]: var-lib-kubelet-pods-cbbe966c\x2d0a8c\x2d4af7\x2db5f1\x2d2e4d5d293544-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 12:25:09.865210 kubelet[2747]: E1216 12:25:09.865149 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:09.871597 kubelet[2747]: I1216 12:25:09.871546 2747 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rn2kj\" (UniqueName: \"kubernetes.io/projected/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-kube-api-access-rn2kj\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:09.871597 kubelet[2747]: I1216 12:25:09.871581 2747 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:09.871597 kubelet[2747]: I1216 12:25:09.871593 2747 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:09.875951 systemd[1]: Removed slice kubepods-besteffort-podcbbe966c_0a8c_4af7_b5f1_2e4d5d293544.slice - libcontainer container kubepods-besteffort-podcbbe966c_0a8c_4af7_b5f1_2e4d5d293544.slice. 
Dec 16 12:25:09.910063 kubelet[2747]: I1216 12:25:09.909968 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6wxbl" podStartSLOduration=1.8816838649999998 podStartE2EDuration="12.909950706s" podCreationTimestamp="2025-12-16 12:24:57 +0000 UTC" firstStartedPulling="2025-12-16 12:24:58.061524055 +0000 UTC m=+25.444256318" lastFinishedPulling="2025-12-16 12:25:09.089790936 +0000 UTC m=+36.472523159" observedRunningTime="2025-12-16 12:25:09.896378194 +0000 UTC m=+37.279110457" watchObservedRunningTime="2025-12-16 12:25:09.909950706 +0000 UTC m=+37.292683009" Dec 16 12:25:10.026631 systemd[1]: Created slice kubepods-besteffort-pod0f73049e_3478_4f3a_8d48_04802f1162ec.slice - libcontainer container kubepods-besteffort-pod0f73049e_3478_4f3a_8d48_04802f1162ec.slice. Dec 16 12:25:10.174740 kubelet[2747]: I1216 12:25:10.174681 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f73049e-3478-4f3a-8d48-04802f1162ec-whisker-backend-key-pair\") pod \"whisker-786b7dd598-6wh88\" (UID: \"0f73049e-3478-4f3a-8d48-04802f1162ec\") " pod="calico-system/whisker-786b7dd598-6wh88" Dec 16 12:25:10.174740 kubelet[2747]: I1216 12:25:10.174743 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f73049e-3478-4f3a-8d48-04802f1162ec-whisker-ca-bundle\") pod \"whisker-786b7dd598-6wh88\" (UID: \"0f73049e-3478-4f3a-8d48-04802f1162ec\") " pod="calico-system/whisker-786b7dd598-6wh88" Dec 16 12:25:10.174972 kubelet[2747]: I1216 12:25:10.174772 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w6tk\" (UniqueName: \"kubernetes.io/projected/0f73049e-3478-4f3a-8d48-04802f1162ec-kube-api-access-9w6tk\") pod \"whisker-786b7dd598-6wh88\" (UID: 
\"0f73049e-3478-4f3a-8d48-04802f1162ec\") " pod="calico-system/whisker-786b7dd598-6wh88" Dec 16 12:25:10.330813 containerd[1581]: time="2025-12-16T12:25:10.330683248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-786b7dd598-6wh88,Uid:0f73049e-3478-4f3a-8d48-04802f1162ec,Namespace:calico-system,Attempt:0,}" Dec 16 12:25:10.557960 systemd-networkd[1489]: cali6689c8f6141: Link UP Dec 16 12:25:10.558675 systemd-networkd[1489]: cali6689c8f6141: Gained carrier Dec 16 12:25:10.578855 containerd[1581]: 2025-12-16 12:25:10.360 [INFO][3818] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:25:10.578855 containerd[1581]: 2025-12-16 12:25:10.403 [INFO][3818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--786b7dd598--6wh88-eth0 whisker-786b7dd598- calico-system 0f73049e-3478-4f3a-8d48-04802f1162ec 905 0 2025-12-16 12:25:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:786b7dd598 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-786b7dd598-6wh88 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6689c8f6141 [] [] }} ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Namespace="calico-system" Pod="whisker-786b7dd598-6wh88" WorkloadEndpoint="localhost-k8s-whisker--786b7dd598--6wh88-" Dec 16 12:25:10.578855 containerd[1581]: 2025-12-16 12:25:10.403 [INFO][3818] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Namespace="calico-system" Pod="whisker-786b7dd598-6wh88" WorkloadEndpoint="localhost-k8s-whisker--786b7dd598--6wh88-eth0" Dec 16 12:25:10.578855 containerd[1581]: 2025-12-16 12:25:10.496 [INFO][3831] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" HandleID="k8s-pod-network.8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Workload="localhost-k8s-whisker--786b7dd598--6wh88-eth0" Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.497 [INFO][3831] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" HandleID="k8s-pod-network.8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Workload="localhost-k8s-whisker--786b7dd598--6wh88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000583150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-786b7dd598-6wh88", "timestamp":"2025-12-16 12:25:10.496656137 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.497 [INFO][3831] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.497 [INFO][3831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.497 [INFO][3831] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.509 [INFO][3831] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" host="localhost" Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.519 [INFO][3831] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.526 [INFO][3831] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.529 [INFO][3831] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.532 [INFO][3831] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:10.580219 containerd[1581]: 2025-12-16 12:25:10.532 [INFO][3831] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" host="localhost" Dec 16 12:25:10.580456 containerd[1581]: 2025-12-16 12:25:10.534 [INFO][3831] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79 Dec 16 12:25:10.580456 containerd[1581]: 2025-12-16 12:25:10.539 [INFO][3831] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" host="localhost" Dec 16 12:25:10.580456 containerd[1581]: 2025-12-16 12:25:10.547 [INFO][3831] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" host="localhost" Dec 16 12:25:10.580456 containerd[1581]: 2025-12-16 12:25:10.547 [INFO][3831] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" host="localhost" Dec 16 12:25:10.580456 containerd[1581]: 2025-12-16 12:25:10.548 [INFO][3831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:25:10.580456 containerd[1581]: 2025-12-16 12:25:10.548 [INFO][3831] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" HandleID="k8s-pod-network.8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Workload="localhost-k8s-whisker--786b7dd598--6wh88-eth0" Dec 16 12:25:10.580797 containerd[1581]: 2025-12-16 12:25:10.550 [INFO][3818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Namespace="calico-system" Pod="whisker-786b7dd598-6wh88" WorkloadEndpoint="localhost-k8s-whisker--786b7dd598--6wh88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--786b7dd598--6wh88-eth0", GenerateName:"whisker-786b7dd598-", Namespace:"calico-system", SelfLink:"", UID:"0f73049e-3478-4f3a-8d48-04802f1162ec", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"786b7dd598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-786b7dd598-6wh88", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6689c8f6141", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:10.580797 containerd[1581]: 2025-12-16 12:25:10.550 [INFO][3818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Namespace="calico-system" Pod="whisker-786b7dd598-6wh88" WorkloadEndpoint="localhost-k8s-whisker--786b7dd598--6wh88-eth0" Dec 16 12:25:10.580981 containerd[1581]: 2025-12-16 12:25:10.550 [INFO][3818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6689c8f6141 ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Namespace="calico-system" Pod="whisker-786b7dd598-6wh88" WorkloadEndpoint="localhost-k8s-whisker--786b7dd598--6wh88-eth0" Dec 16 12:25:10.580981 containerd[1581]: 2025-12-16 12:25:10.559 [INFO][3818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Namespace="calico-system" Pod="whisker-786b7dd598-6wh88" WorkloadEndpoint="localhost-k8s-whisker--786b7dd598--6wh88-eth0" Dec 16 12:25:10.581063 containerd[1581]: 2025-12-16 12:25:10.560 [INFO][3818] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Namespace="calico-system" Pod="whisker-786b7dd598-6wh88" 
WorkloadEndpoint="localhost-k8s-whisker--786b7dd598--6wh88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--786b7dd598--6wh88-eth0", GenerateName:"whisker-786b7dd598-", Namespace:"calico-system", SelfLink:"", UID:"0f73049e-3478-4f3a-8d48-04802f1162ec", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 25, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"786b7dd598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79", Pod:"whisker-786b7dd598-6wh88", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6689c8f6141", MAC:"5a:a2:71:e8:05:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:10.581129 containerd[1581]: 2025-12-16 12:25:10.574 [INFO][3818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" Namespace="calico-system" Pod="whisker-786b7dd598-6wh88" WorkloadEndpoint="localhost-k8s-whisker--786b7dd598--6wh88-eth0" Dec 16 12:25:10.631537 containerd[1581]: time="2025-12-16T12:25:10.630983437Z" level=info msg="connecting to shim 
8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79" address="unix:///run/containerd/s/1bc48596d7fe9274b184f1a2b30570c792be5d41d91daabcc8b744339b406517" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:10.665264 systemd[1]: Started cri-containerd-8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79.scope - libcontainer container 8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79. Dec 16 12:25:10.679000 audit: BPF prog-id=175 op=LOAD Dec 16 12:25:10.680000 audit: BPF prog-id=176 op=LOAD Dec 16 12:25:10.680000 audit[3865]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3854 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:10.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866393365343361656236633433626532396439313963313632663435 Dec 16 12:25:10.680000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:25:10.680000 audit[3865]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3854 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:10.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866393365343361656236633433626532396439313963313632663435 Dec 16 12:25:10.681000 audit: BPF prog-id=177 op=LOAD Dec 16 12:25:10.681000 audit[3865]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 
a1=40001303e8 a2=98 a3=0 items=0 ppid=3854 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:10.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866393365343361656236633433626532396439313963313632663435 Dec 16 12:25:10.681000 audit: BPF prog-id=178 op=LOAD Dec 16 12:25:10.681000 audit[3865]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3854 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:10.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866393365343361656236633433626532396439313963313632663435 Dec 16 12:25:10.682000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:25:10.682000 audit[3865]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3854 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:10.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866393365343361656236633433626532396439313963313632663435 Dec 16 12:25:10.682000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:25:10.682000 audit[3865]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3854 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:10.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866393365343361656236633433626532396439313963313632663435 Dec 16 12:25:10.682000 audit: BPF prog-id=179 op=LOAD Dec 16 12:25:10.682000 audit[3865]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3854 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:10.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866393365343361656236633433626532396439313963313632663435 Dec 16 12:25:10.684884 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:25:10.727479 containerd[1581]: time="2025-12-16T12:25:10.727435821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-786b7dd598-6wh88,Uid:0f73049e-3478-4f3a-8d48-04802f1162ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f93e43aeb6c43be29d919c162f45ae860bcc200bd2c39455c504a0f93efca79\"" Dec 16 12:25:10.730919 containerd[1581]: time="2025-12-16T12:25:10.730825664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:25:10.732153 kubelet[2747]: I1216 12:25:10.732121 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="cbbe966c-0a8c-4af7-b5f1-2e4d5d293544" path="/var/lib/kubelet/pods/cbbe966c-0a8c-4af7-b5f1-2e4d5d293544/volumes" Dec 16 12:25:10.868259 kubelet[2747]: I1216 12:25:10.868150 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:25:10.869215 kubelet[2747]: E1216 12:25:10.869115 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:11.009141 containerd[1581]: time="2025-12-16T12:25:11.009088000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:11.169927 containerd[1581]: time="2025-12-16T12:25:11.169776485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:25:11.169927 containerd[1581]: time="2025-12-16T12:25:11.169844054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:11.170123 kubelet[2747]: E1216 12:25:11.170055 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:25:11.172413 kubelet[2747]: E1216 12:25:11.172357 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:25:11.182265 kubelet[2747]: E1216 12:25:11.182190 2747 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:68ad85c0c4be4b809ac5804e8fb5f9e2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9w6tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786b7dd598-6wh88_calico-system(0f73049e-3478-4f3a-8d48-04802f1162ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:11.184278 containerd[1581]: time="2025-12-16T12:25:11.184232486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:25:11.462983 
containerd[1581]: time="2025-12-16T12:25:11.462764685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:11.467839 containerd[1581]: time="2025-12-16T12:25:11.467781979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:25:11.468166 containerd[1581]: time="2025-12-16T12:25:11.467920799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:11.468226 kubelet[2747]: E1216 12:25:11.468182 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:25:11.468275 kubelet[2747]: E1216 12:25:11.468239 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:25:11.468447 kubelet[2747]: E1216 12:25:11.468396 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w6tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786b7dd598-6wh88_calico-system(0f73049e-3478-4f3a-8d48-04802f1162ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:11.470603 kubelet[2747]: E1216 12:25:11.470517 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786b7dd598-6wh88" podUID="0f73049e-3478-4f3a-8d48-04802f1162ec" Dec 16 12:25:11.880468 kubelet[2747]: E1216 12:25:11.879559 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786b7dd598-6wh88" podUID="0f73049e-3478-4f3a-8d48-04802f1162ec" Dec 16 12:25:11.921000 audit[3996]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=3996 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:11.921000 audit[3996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd663e270 a2=0 a3=1 items=0 ppid=2883 pid=3996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:11.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:11.932000 audit[3996]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=3996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:11.932000 audit[3996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd663e270 a2=0 a3=1 items=0 ppid=2883 pid=3996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:11.932000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:11.985128 systemd-networkd[1489]: cali6689c8f6141: Gained IPv6LL Dec 16 12:25:14.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.36:22-10.0.0.1:39562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:14.682624 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:39562.service - OpenSSH per-connection server daemon (10.0.0.1:39562). 
Dec 16 12:25:14.683444 kernel: kauditd_printk_skb: 33 callbacks suppressed Dec 16 12:25:14.683489 kernel: audit: type=1130 audit(1765887914.681:586): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.36:22-10.0.0.1:39562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:14.731010 containerd[1581]: time="2025-12-16T12:25:14.730969222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67754b54bf-t5zll,Uid:1414ae41-c2cb-4936-90b7-c8854a1bb586,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:25:14.731507 containerd[1581]: time="2025-12-16T12:25:14.731337389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n9dd2,Uid:c186cd40-d4dc-48c3-8fe5-5af674baa410,Namespace:calico-system,Attempt:0,}" Dec 16 12:25:14.776170 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 39562 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:14.774000 audit[4071]: USER_ACCT pid=4071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:14.778239 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:14.776000 audit[4071]: CRED_ACQ pid=4071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:14.783507 kernel: audit: type=1101 audit(1765887914.774:587): pid=4071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:14.783628 kernel: audit: type=1103 audit(1765887914.776:588): pid=4071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:14.786113 kernel: audit: type=1006 audit(1765887914.776:589): pid=4071 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 16 12:25:14.776000 audit[4071]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcfe91b60 a2=3 a3=0 items=0 ppid=1 pid=4071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:14.789950 kernel: audit: type=1300 audit(1765887914.776:589): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcfe91b60 a2=3 a3=0 items=0 ppid=1 pid=4071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:14.776000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:14.791469 kernel: audit: type=1327 audit(1765887914.776:589): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:14.796676 systemd-logind[1558]: New session 8 of user core. Dec 16 12:25:14.806365 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 12:25:14.808000 audit[4071]: USER_START pid=4071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:14.817943 kernel: audit: type=1105 audit(1765887914.808:590): pid=4071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:14.818071 kernel: audit: type=1103 audit(1765887914.815:591): pid=4101 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:14.815000 audit[4101]: CRED_ACQ pid=4101 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:14.919226 systemd-networkd[1489]: cali60aed389235: Link UP Dec 16 12:25:14.919749 systemd-networkd[1489]: cali60aed389235: Gained carrier Dec 16 12:25:14.936809 containerd[1581]: 2025-12-16 12:25:14.791 [INFO][4075] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:25:14.936809 containerd[1581]: 2025-12-16 12:25:14.815 [INFO][4075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0 calico-apiserver-67754b54bf- calico-apiserver 1414ae41-c2cb-4936-90b7-c8854a1bb586 837 0 2025-12-16 12:24:50 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67754b54bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67754b54bf-t5zll eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali60aed389235 [] [] }} ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-t5zll" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--t5zll-" Dec 16 12:25:14.936809 containerd[1581]: 2025-12-16 12:25:14.815 [INFO][4075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-t5zll" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" Dec 16 12:25:14.936809 containerd[1581]: 2025-12-16 12:25:14.857 [INFO][4105] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" HandleID="k8s-pod-network.ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Workload="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.858 [INFO][4105] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" HandleID="k8s-pod-network.ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Workload="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a07c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67754b54bf-t5zll", "timestamp":"2025-12-16 12:25:14.857800902 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.858 [INFO][4105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.858 [INFO][4105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.858 [INFO][4105] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.872 [INFO][4105] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" host="localhost" Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.882 [INFO][4105] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.889 [INFO][4105] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.893 [INFO][4105] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.896 [INFO][4105] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:14.937078 containerd[1581]: 2025-12-16 12:25:14.896 [INFO][4105] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" host="localhost" Dec 16 12:25:14.937280 containerd[1581]: 2025-12-16 12:25:14.899 [INFO][4105] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83 Dec 16 12:25:14.937280 containerd[1581]: 2025-12-16 12:25:14.904 [INFO][4105] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" host="localhost" Dec 16 12:25:14.937280 containerd[1581]: 2025-12-16 12:25:14.910 [INFO][4105] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" host="localhost" Dec 16 12:25:14.937280 containerd[1581]: 2025-12-16 12:25:14.910 [INFO][4105] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" host="localhost" Dec 16 12:25:14.937280 containerd[1581]: 2025-12-16 12:25:14.910 [INFO][4105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:25:14.937280 containerd[1581]: 2025-12-16 12:25:14.910 [INFO][4105] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" HandleID="k8s-pod-network.ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Workload="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" Dec 16 12:25:14.937396 containerd[1581]: 2025-12-16 12:25:14.913 [INFO][4075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-t5zll" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0", GenerateName:"calico-apiserver-67754b54bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1414ae41-c2cb-4936-90b7-c8854a1bb586", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67754b54bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67754b54bf-t5zll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60aed389235", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:14.937443 containerd[1581]: 2025-12-16 12:25:14.915 [INFO][4075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-t5zll" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" Dec 16 12:25:14.937443 containerd[1581]: 2025-12-16 12:25:14.915 [INFO][4075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60aed389235 ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-t5zll" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" Dec 16 12:25:14.937443 containerd[1581]: 2025-12-16 12:25:14.920 [INFO][4075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-t5zll" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" Dec 16 12:25:14.937533 containerd[1581]: 2025-12-16 12:25:14.921 [INFO][4075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-t5zll" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0", 
GenerateName:"calico-apiserver-67754b54bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1414ae41-c2cb-4936-90b7-c8854a1bb586", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67754b54bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83", Pod:"calico-apiserver-67754b54bf-t5zll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60aed389235", MAC:"7e:b1:75:2e:bc:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:14.937600 containerd[1581]: 2025-12-16 12:25:14.932 [INFO][4075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-t5zll" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--t5zll-eth0" Dec 16 12:25:14.972970 containerd[1581]: time="2025-12-16T12:25:14.972353414Z" level=info msg="connecting to shim ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83" 
address="unix:///run/containerd/s/5ed23c0e7cf0ee48ca2cebd14e190902d82e7ced083b0a12d68a5bd1eafc54cf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:15.009214 systemd[1]: Started cri-containerd-ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83.scope - libcontainer container ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83. Dec 16 12:25:15.024000 audit: BPF prog-id=180 op=LOAD Dec 16 12:25:15.026943 kernel: audit: type=1334 audit(1765887915.024:592): prog-id=180 op=LOAD Dec 16 12:25:15.025000 audit: BPF prog-id=181 op=LOAD Dec 16 12:25:15.025000 audit[4159]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333330353030383834616234623164383730616636643862653232 Dec 16 12:25:15.026000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:25:15.026000 audit[4159]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333330353030383834616234623164383730616636643862653232 Dec 16 12:25:15.029023 kernel: audit: type=1334 audit(1765887915.025:593): prog-id=181 op=LOAD Dec 16 12:25:15.028976 
systemd-networkd[1489]: cali18d30a7bf66: Link UP Dec 16 12:25:15.026000 audit: BPF prog-id=182 op=LOAD Dec 16 12:25:15.026000 audit[4159]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333330353030383834616234623164383730616636643862653232 Dec 16 12:25:15.028000 audit: BPF prog-id=183 op=LOAD Dec 16 12:25:15.028000 audit[4159]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333330353030383834616234623164383730616636643862653232 Dec 16 12:25:15.028000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:25:15.028000 audit[4159]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.028000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333330353030383834616234623164383730616636643862653232 Dec 16 12:25:15.028000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:25:15.028000 audit[4159]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333330353030383834616234623164383730616636643862653232 Dec 16 12:25:15.028000 audit: BPF prog-id=184 op=LOAD Dec 16 12:25:15.028000 audit[4159]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333330353030383834616234623164383730616636643862653232 Dec 16 12:25:15.029271 systemd-networkd[1489]: cali18d30a7bf66: Gained carrier Dec 16 12:25:15.031135 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:25:15.040350 sshd[4101]: Connection closed by 10.0.0.1 port 39562 Dec 16 12:25:15.040764 sshd-session[4071]: pam_unix(sshd:session): session 
closed for user core Dec 16 12:25:15.042000 audit[4071]: USER_END pid=4071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:15.043000 audit[4071]: CRED_DISP pid=4071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:15.048374 containerd[1581]: 2025-12-16 12:25:14.808 [INFO][4080] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:25:15.048374 containerd[1581]: 2025-12-16 12:25:14.830 [INFO][4080] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--n9dd2-eth0 goldmane-666569f655- calico-system c186cd40-d4dc-48c3-8fe5-5af674baa410 842 0 2025-12-16 12:24:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-n9dd2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali18d30a7bf66 [] [] }} ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Namespace="calico-system" Pod="goldmane-666569f655-n9dd2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--n9dd2-" Dec 16 12:25:15.048374 containerd[1581]: 2025-12-16 12:25:14.830 [INFO][4080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Namespace="calico-system" Pod="goldmane-666569f655-n9dd2" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--n9dd2-eth0" Dec 16 12:25:15.048374 containerd[1581]: 2025-12-16 12:25:14.868 [INFO][4115] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" HandleID="k8s-pod-network.026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Workload="localhost-k8s-goldmane--666569f655--n9dd2-eth0" Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:14.868 [INFO][4115] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" HandleID="k8s-pod-network.026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Workload="localhost-k8s-goldmane--666569f655--n9dd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-n9dd2", "timestamp":"2025-12-16 12:25:14.868158665 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:14.868 [INFO][4115] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:14.910 [INFO][4115] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:14.910 [INFO][4115] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:14.971 [INFO][4115] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" host="localhost" Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:14.983 [INFO][4115] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:14.994 [INFO][4115] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:14.997 [INFO][4115] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:15.004 [INFO][4115] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:15.048622 containerd[1581]: 2025-12-16 12:25:15.004 [INFO][4115] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" host="localhost" Dec 16 12:25:15.048453 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:39562.service: Deactivated successfully. 
Dec 16 12:25:15.049159 containerd[1581]: 2025-12-16 12:25:15.007 [INFO][4115] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d Dec 16 12:25:15.049159 containerd[1581]: 2025-12-16 12:25:15.012 [INFO][4115] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" host="localhost" Dec 16 12:25:15.049159 containerd[1581]: 2025-12-16 12:25:15.020 [INFO][4115] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" host="localhost" Dec 16 12:25:15.049159 containerd[1581]: 2025-12-16 12:25:15.020 [INFO][4115] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" host="localhost" Dec 16 12:25:15.049159 containerd[1581]: 2025-12-16 12:25:15.021 [INFO][4115] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:25:15.049159 containerd[1581]: 2025-12-16 12:25:15.022 [INFO][4115] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" HandleID="k8s-pod-network.026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Workload="localhost-k8s-goldmane--666569f655--n9dd2-eth0" Dec 16 12:25:15.049295 containerd[1581]: 2025-12-16 12:25:15.025 [INFO][4080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Namespace="calico-system" Pod="goldmane-666569f655-n9dd2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--n9dd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--n9dd2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c186cd40-d4dc-48c3-8fe5-5af674baa410", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-n9dd2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18d30a7bf66", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:15.049295 containerd[1581]: 2025-12-16 12:25:15.025 [INFO][4080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Namespace="calico-system" Pod="goldmane-666569f655-n9dd2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--n9dd2-eth0" Dec 16 12:25:15.049458 containerd[1581]: 2025-12-16 12:25:15.026 [INFO][4080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18d30a7bf66 ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Namespace="calico-system" Pod="goldmane-666569f655-n9dd2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--n9dd2-eth0" Dec 16 12:25:15.049458 containerd[1581]: 2025-12-16 12:25:15.030 [INFO][4080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Namespace="calico-system" Pod="goldmane-666569f655-n9dd2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--n9dd2-eth0" Dec 16 12:25:15.049533 containerd[1581]: 2025-12-16 12:25:15.031 [INFO][4080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Namespace="calico-system" Pod="goldmane-666569f655-n9dd2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--n9dd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--n9dd2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c186cd40-d4dc-48c3-8fe5-5af674baa410", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 54, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d", Pod:"goldmane-666569f655-n9dd2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18d30a7bf66", MAC:"ce:2e:3f:ff:bf:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:15.049612 containerd[1581]: 2025-12-16 12:25:15.045 [INFO][4080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" Namespace="calico-system" Pod="goldmane-666569f655-n9dd2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--n9dd2-eth0" Dec 16 12:25:15.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.36:22-10.0.0.1:39562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:15.052234 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:25:15.054592 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:25:15.056269 systemd-logind[1558]: Removed session 8. 
Dec 16 12:25:15.076113 containerd[1581]: time="2025-12-16T12:25:15.076068826Z" level=info msg="connecting to shim 026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d" address="unix:///run/containerd/s/94b67611c41da165b344f98cdefc449b926ad7d2cec8e5ea346bb82dea27a570" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:15.090966 containerd[1581]: time="2025-12-16T12:25:15.090898514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67754b54bf-t5zll,Uid:1414ae41-c2cb-4936-90b7-c8854a1bb586,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ae330500884ab4b1d870af6d8be22496552e021f5a4a52155546165b23a56a83\"" Dec 16 12:25:15.094234 containerd[1581]: time="2025-12-16T12:25:15.094159760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:25:15.108217 systemd[1]: Started cri-containerd-026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d.scope - libcontainer container 026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d. 
Dec 16 12:25:15.118000 audit: BPF prog-id=185 op=LOAD Dec 16 12:25:15.119000 audit: BPF prog-id=186 op=LOAD Dec 16 12:25:15.119000 audit[4214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4204 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366330653162346632386331336237663262613931393334333766 Dec 16 12:25:15.119000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:25:15.119000 audit[4214]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4204 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366330653162346632386331336237663262613931393334333766 Dec 16 12:25:15.119000 audit: BPF prog-id=187 op=LOAD Dec 16 12:25:15.119000 audit[4214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4204 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.119000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366330653162346632386331336237663262613931393334333766 Dec 16 12:25:15.119000 audit: BPF prog-id=188 op=LOAD Dec 16 12:25:15.119000 audit[4214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4204 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366330653162346632386331336237663262613931393334333766 Dec 16 12:25:15.119000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:25:15.119000 audit[4214]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4204 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366330653162346632386331336237663262613931393334333766 Dec 16 12:25:15.119000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:25:15.119000 audit[4214]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4204 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:25:15.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366330653162346632386331336237663262613931393334333766 Dec 16 12:25:15.119000 audit: BPF prog-id=189 op=LOAD Dec 16 12:25:15.119000 audit[4214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4204 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032366330653162346632386331336237663262613931393334333766 Dec 16 12:25:15.121181 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:25:15.157198 containerd[1581]: time="2025-12-16T12:25:15.157156370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n9dd2,Uid:c186cd40-d4dc-48c3-8fe5-5af674baa410,Namespace:calico-system,Attempt:0,} returns sandbox id \"026c0e1b4f28c13b7f2ba9193437f9e70d1186e8db3bd9ae3bd8dc441ca14c5d\"" Dec 16 12:25:15.373532 containerd[1581]: time="2025-12-16T12:25:15.373472523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:15.375958 containerd[1581]: time="2025-12-16T12:25:15.375875262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:25:15.376074 
containerd[1581]: time="2025-12-16T12:25:15.376000318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:15.376261 kubelet[2747]: E1216 12:25:15.376210 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:15.376921 kubelet[2747]: E1216 12:25:15.376279 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:15.376921 kubelet[2747]: E1216 12:25:15.376553 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvjdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67754b54bf-t5zll_calico-apiserver(1414ae41-c2cb-4936-90b7-c8854a1bb586): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:15.377415 containerd[1581]: time="2025-12-16T12:25:15.377140340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:25:15.377920 kubelet[2747]: E1216 12:25:15.377820 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll" podUID="1414ae41-c2cb-4936-90b7-c8854a1bb586" Dec 16 12:25:15.598597 containerd[1581]: time="2025-12-16T12:25:15.598509162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:15.613142 containerd[1581]: time="2025-12-16T12:25:15.613043573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:15.613278 containerd[1581]: time="2025-12-16T12:25:15.613061615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:25:15.613787 kubelet[2747]: E1216 12:25:15.613745 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:25:15.613921 kubelet[2747]: E1216 12:25:15.613799 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:25:15.614072 kubelet[2747]: E1216 12:25:15.613996 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvmxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n9dd2_calico-system(c186cd40-d4dc-48c3-8fe5-5af674baa410): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:15.615815 kubelet[2747]: E1216 12:25:15.615761 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9dd2" podUID="c186cd40-d4dc-48c3-8fe5-5af674baa410" Dec 16 12:25:15.729281 kubelet[2747]: E1216 12:25:15.729152 2747 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:15.730079 containerd[1581]: time="2025-12-16T12:25:15.730032950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p6dhb,Uid:8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64,Namespace:kube-system,Attempt:0,}" Dec 16 12:25:15.845202 systemd-networkd[1489]: cali5baf6ce852c: Link UP Dec 16 12:25:15.846232 systemd-networkd[1489]: cali5baf6ce852c: Gained carrier Dec 16 12:25:15.860066 containerd[1581]: 2025-12-16 12:25:15.755 [INFO][4262] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:25:15.860066 containerd[1581]: 2025-12-16 12:25:15.770 [INFO][4262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0 coredns-674b8bbfcf- kube-system 8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64 834 0 2025-12-16 12:24:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-p6dhb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5baf6ce852c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Namespace="kube-system" Pod="coredns-674b8bbfcf-p6dhb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p6dhb-" Dec 16 12:25:15.860066 containerd[1581]: 2025-12-16 12:25:15.771 [INFO][4262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Namespace="kube-system" Pod="coredns-674b8bbfcf-p6dhb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" Dec 16 12:25:15.860066 containerd[1581]: 2025-12-16 12:25:15.797 [INFO][4277] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" HandleID="k8s-pod-network.a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Workload="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.797 [INFO][4277] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" HandleID="k8s-pod-network.a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Workload="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c1f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-p6dhb", "timestamp":"2025-12-16 12:25:15.797195438 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.797 [INFO][4277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.797 [INFO][4277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.797 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.807 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" host="localhost" Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.813 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.819 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.822 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.825 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:15.860562 containerd[1581]: 2025-12-16 12:25:15.825 [INFO][4277] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" host="localhost" Dec 16 12:25:15.860834 containerd[1581]: 2025-12-16 12:25:15.827 [INFO][4277] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091 Dec 16 12:25:15.860834 containerd[1581]: 2025-12-16 12:25:15.833 [INFO][4277] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" host="localhost" Dec 16 12:25:15.860834 containerd[1581]: 2025-12-16 12:25:15.840 [INFO][4277] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" host="localhost" Dec 16 12:25:15.860834 containerd[1581]: 2025-12-16 12:25:15.840 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" host="localhost" Dec 16 12:25:15.860834 containerd[1581]: 2025-12-16 12:25:15.840 [INFO][4277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:25:15.860834 containerd[1581]: 2025-12-16 12:25:15.840 [INFO][4277] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" HandleID="k8s-pod-network.a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Workload="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" Dec 16 12:25:15.860996 containerd[1581]: 2025-12-16 12:25:15.842 [INFO][4262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Namespace="kube-system" Pod="coredns-674b8bbfcf-p6dhb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-p6dhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5baf6ce852c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:15.861055 containerd[1581]: 2025-12-16 12:25:15.843 [INFO][4262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Namespace="kube-system" Pod="coredns-674b8bbfcf-p6dhb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" Dec 16 12:25:15.861055 containerd[1581]: 2025-12-16 12:25:15.843 [INFO][4262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5baf6ce852c ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Namespace="kube-system" Pod="coredns-674b8bbfcf-p6dhb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" Dec 16 12:25:15.861055 containerd[1581]: 2025-12-16 12:25:15.845 [INFO][4262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Namespace="kube-system" Pod="coredns-674b8bbfcf-p6dhb" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" Dec 16 12:25:15.861132 containerd[1581]: 2025-12-16 12:25:15.845 [INFO][4262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Namespace="kube-system" Pod="coredns-674b8bbfcf-p6dhb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091", Pod:"coredns-674b8bbfcf-p6dhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5baf6ce852c", MAC:"d6:93:41:b6:90:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:15.861132 containerd[1581]: 2025-12-16 12:25:15.857 [INFO][4262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" Namespace="kube-system" Pod="coredns-674b8bbfcf-p6dhb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p6dhb-eth0" Dec 16 12:25:15.892539 kubelet[2747]: E1216 12:25:15.892338 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9dd2" podUID="c186cd40-d4dc-48c3-8fe5-5af674baa410" Dec 16 12:25:15.896121 kubelet[2747]: E1216 12:25:15.896056 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll" podUID="1414ae41-c2cb-4936-90b7-c8854a1bb586" Dec 16 12:25:15.904937 containerd[1581]: time="2025-12-16T12:25:15.904154005Z" level=info msg="connecting to shim a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091" 
address="unix:///run/containerd/s/59545032ec0c8521f24798ce26cf240c20b90166b6a7878fed1e19ba51b0cfeb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:15.939000 audit[4318]: NETFILTER_CFG table=filter:123 family=2 entries=22 op=nft_register_rule pid=4318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:15.939000 audit[4318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff3e9f520 a2=0 a3=1 items=0 ppid=2883 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:15.946228 systemd[1]: Started cri-containerd-a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091.scope - libcontainer container a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091. 
Dec 16 12:25:15.948000 audit[4318]: NETFILTER_CFG table=nat:124 family=2 entries=12 op=nft_register_rule pid=4318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:15.948000 audit[4318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff3e9f520 a2=0 a3=1 items=0 ppid=2883 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:15.959000 audit: BPF prog-id=190 op=LOAD Dec 16 12:25:15.960000 audit: BPF prog-id=191 op=LOAD Dec 16 12:25:15.960000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4298 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383361373134346265326263626535366565343461353837363833 Dec 16 12:25:15.960000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:25:15.960000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4298 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.960000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383361373134346265326263626535366565343461353837363833 Dec 16 12:25:15.960000 audit: BPF prog-id=192 op=LOAD Dec 16 12:25:15.960000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4298 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383361373134346265326263626535366565343461353837363833 Dec 16 12:25:15.960000 audit: BPF prog-id=193 op=LOAD Dec 16 12:25:15.960000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4298 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383361373134346265326263626535366565343461353837363833 Dec 16 12:25:15.960000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:25:15.960000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4298 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:25:15.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383361373134346265326263626535366565343461353837363833 Dec 16 12:25:15.960000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:25:15.960000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4298 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383361373134346265326263626535366565343461353837363833 Dec 16 12:25:15.960000 audit: BPF prog-id=194 op=LOAD Dec 16 12:25:15.960000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4298 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383361373134346265326263626535366565343461353837363833 Dec 16 12:25:15.962969 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:25:15.965000 audit[4335]: NETFILTER_CFG table=filter:125 family=2 entries=22 op=nft_register_rule pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 
16 12:25:15.965000 audit[4335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc9858790 a2=0 a3=1 items=0 ppid=2883 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:15.973000 audit[4335]: NETFILTER_CFG table=nat:126 family=2 entries=12 op=nft_register_rule pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:15.973000 audit[4335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc9858790 a2=0 a3=1 items=0 ppid=2883 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:15.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:15.998946 containerd[1581]: time="2025-12-16T12:25:15.998820121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p6dhb,Uid:8b94d2bc-c5a4-45dc-8d85-f25fcbe8bb64,Namespace:kube-system,Attempt:0,} returns sandbox id \"a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091\"" Dec 16 12:25:16.000734 kubelet[2747]: E1216 12:25:16.000543 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:16.007017 containerd[1581]: time="2025-12-16T12:25:16.006627757Z" level=info msg="CreateContainer within sandbox \"a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" 
Dec 16 12:25:16.017812 containerd[1581]: time="2025-12-16T12:25:16.017764872Z" level=info msg="Container 70d77b358ed3c3b13e135bcd5a5686a35f741ecb4ff79e5204367022f7908901: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:16.025871 containerd[1581]: time="2025-12-16T12:25:16.025813451Z" level=info msg="CreateContainer within sandbox \"a583a7144be2bcbe56ee44a587683e6b8e153df4c558a34fe200181b44ee6091\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70d77b358ed3c3b13e135bcd5a5686a35f741ecb4ff79e5204367022f7908901\"" Dec 16 12:25:16.026577 containerd[1581]: time="2025-12-16T12:25:16.026414844Z" level=info msg="StartContainer for \"70d77b358ed3c3b13e135bcd5a5686a35f741ecb4ff79e5204367022f7908901\"" Dec 16 12:25:16.027927 containerd[1581]: time="2025-12-16T12:25:16.027838938Z" level=info msg="connecting to shim 70d77b358ed3c3b13e135bcd5a5686a35f741ecb4ff79e5204367022f7908901" address="unix:///run/containerd/s/59545032ec0c8521f24798ce26cf240c20b90166b6a7878fed1e19ba51b0cfeb" protocol=ttrpc version=3 Dec 16 12:25:16.054183 systemd[1]: Started cri-containerd-70d77b358ed3c3b13e135bcd5a5686a35f741ecb4ff79e5204367022f7908901.scope - libcontainer container 70d77b358ed3c3b13e135bcd5a5686a35f741ecb4ff79e5204367022f7908901. 
Dec 16 12:25:16.064000 audit: BPF prog-id=195 op=LOAD Dec 16 12:25:16.065000 audit: BPF prog-id=196 op=LOAD Dec 16 12:25:16.065000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4298 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643737623335386564336333623133653133356263643561353638 Dec 16 12:25:16.065000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:25:16.065000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4298 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643737623335386564336333623133653133356263643561353638 Dec 16 12:25:16.065000 audit: BPF prog-id=197 op=LOAD Dec 16 12:25:16.065000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4298 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.065000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643737623335386564336333623133653133356263643561353638 Dec 16 12:25:16.065000 audit: BPF prog-id=198 op=LOAD Dec 16 12:25:16.065000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4298 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643737623335386564336333623133653133356263643561353638 Dec 16 12:25:16.065000 audit: BPF prog-id=198 op=UNLOAD Dec 16 12:25:16.065000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4298 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643737623335386564336333623133653133356263643561353638 Dec 16 12:25:16.065000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:25:16.065000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4298 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:25:16.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643737623335386564336333623133653133356263643561353638 Dec 16 12:25:16.065000 audit: BPF prog-id=199 op=LOAD Dec 16 12:25:16.065000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4298 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643737623335386564336333623133653133356263643561353638 Dec 16 12:25:16.083904 containerd[1581]: time="2025-12-16T12:25:16.083747860Z" level=info msg="StartContainer for \"70d77b358ed3c3b13e135bcd5a5686a35f741ecb4ff79e5204367022f7908901\" returns successfully" Dec 16 12:25:16.145165 systemd-networkd[1489]: cali60aed389235: Gained IPv6LL Dec 16 12:25:16.156207 kubelet[2747]: I1216 12:25:16.156091 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:25:16.156791 kubelet[2747]: E1216 12:25:16.156518 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:16.371000 audit: BPF prog-id=200 op=LOAD Dec 16 12:25:16.371000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf2e01a8 a2=98 a3=ffffcf2e0198 items=0 ppid=4377 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:25:16.371000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:25:16.371000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcf2e0178 a3=0 items=0 ppid=4377 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:25:16.372000 audit: BPF prog-id=201 op=LOAD Dec 16 12:25:16.372000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf2e0058 a2=74 a3=95 items=0 ppid=4377 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.372000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:25:16.372000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:25:16.372000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4377 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.372000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:25:16.372000 audit: BPF prog-id=202 op=LOAD Dec 16 12:25:16.372000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf2e0088 a2=40 a3=ffffcf2e00b8 items=0 ppid=4377 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.372000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:25:16.372000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:25:16.372000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffcf2e00b8 items=0 ppid=4377 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.372000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:25:16.374000 audit: BPF prog-id=203 op=LOAD Dec 16 12:25:16.374000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe0a8b4a8 a2=98 a3=ffffe0a8b498 items=0 
ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.374000 audit: BPF prog-id=203 op=UNLOAD Dec 16 12:25:16.374000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe0a8b478 a3=0 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.374000 audit: BPF prog-id=204 op=LOAD Dec 16 12:25:16.374000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe0a8b138 a2=74 a3=95 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.374000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:25:16.374000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.374000 audit: BPF prog-id=205 op=LOAD Dec 16 12:25:16.374000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe0a8b198 a2=94 a3=2 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.374000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:25:16.374000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.374000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.481000 audit: BPF prog-id=206 op=LOAD Dec 16 12:25:16.481000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe0a8b158 a2=40 a3=ffffe0a8b188 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.481000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.481000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:25:16.481000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe0a8b188 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.481000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.493000 audit: BPF prog-id=207 op=LOAD Dec 16 12:25:16.493000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe0a8b168 a2=94 a3=4 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.493000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.493000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:25:16.493000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.493000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.493000 audit: BPF prog-id=208 op=LOAD Dec 16 12:25:16.493000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe0a8afa8 a2=94 a3=5 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.493000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.493000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:25:16.493000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.493000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.493000 audit: BPF prog-id=209 op=LOAD Dec 16 12:25:16.493000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe0a8b1d8 a2=94 a3=6 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:25:16.493000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.494000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:25:16.494000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.494000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.494000 audit: BPF prog-id=210 op=LOAD Dec 16 12:25:16.494000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe0a8a9a8 a2=94 a3=83 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.494000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.494000 audit: BPF prog-id=211 op=LOAD Dec 16 12:25:16.494000 audit[4396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe0a8a768 a2=94 a3=2 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.494000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.494000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:25:16.494000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.494000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.495000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:25:16.495000 audit[4396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=1c5ab620 a3=1c59eb00 items=0 ppid=4377 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.495000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:25:16.511000 audit: BPF prog-id=212 op=LOAD Dec 16 12:25:16.511000 audit[4419]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd90f9d18 a2=98 a3=ffffd90f9d08 items=0 ppid=4377 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.511000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:25:16.511000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:25:16.511000 audit[4419]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd90f9ce8 a3=0 items=0 ppid=4377 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.511000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:25:16.511000 audit: BPF prog-id=213 op=LOAD 
Dec 16 12:25:16.511000 audit[4419]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd90f9bc8 a2=74 a3=95 items=0 ppid=4377 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.511000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:25:16.511000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:25:16.511000 audit[4419]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4377 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.511000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:25:16.511000 audit: BPF prog-id=214 op=LOAD Dec 16 12:25:16.511000 audit[4419]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd90f9bf8 a2=40 a3=ffffd90f9c28 items=0 ppid=4377 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.511000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:25:16.511000 audit: BPF prog-id=214 op=UNLOAD Dec 16 12:25:16.511000 audit[4419]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd90f9c28 items=0 ppid=4377 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.511000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:25:16.602945 systemd-networkd[1489]: vxlan.calico: Link UP Dec 16 12:25:16.602956 systemd-networkd[1489]: vxlan.calico: Gained carrier Dec 16 12:25:16.634000 audit: BPF prog-id=215 op=LOAD Dec 16 12:25:16.634000 audit[4465]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd64e5938 a2=98 a3=ffffd64e5928 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.634000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.634000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:25:16.634000 audit[4465]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd64e5908 a3=0 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.634000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.634000 audit: BPF prog-id=216 op=LOAD Dec 16 12:25:16.634000 audit[4465]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd64e5618 a2=74 a3=95 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.634000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.634000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:25:16.634000 audit[4465]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.634000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.634000 audit: BPF prog-id=217 op=LOAD Dec 16 12:25:16.634000 audit[4465]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd64e5678 a2=94 a3=2 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.634000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.634000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:25:16.634000 audit[4465]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.634000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.635000 audit: BPF prog-id=218 op=LOAD Dec 16 12:25:16.635000 audit[4465]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd64e54f8 a2=40 a3=ffffd64e5528 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.635000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.635000 audit: BPF prog-id=218 op=UNLOAD Dec 16 12:25:16.635000 audit[4465]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd64e5528 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.635000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.635000 audit: BPF prog-id=219 op=LOAD Dec 16 12:25:16.635000 audit[4465]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd64e5648 a2=94 a3=b7 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.635000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.635000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:25:16.635000 audit[4465]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.635000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.635000 audit: BPF prog-id=220 op=LOAD Dec 16 12:25:16.635000 audit[4465]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd64e4cf8 a2=94 a3=2 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.635000 
audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.635000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:25:16.635000 audit[4465]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.635000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.635000 audit: BPF prog-id=221 op=LOAD Dec 16 12:25:16.635000 audit[4465]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd64e4e88 a2=94 a3=30 items=0 ppid=4377 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.635000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:25:16.646000 audit: BPF prog-id=222 op=LOAD Dec 16 12:25:16.646000 audit[4474]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffe37fd88 a2=98 a3=fffffe37fd78 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.646000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.646000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:25:16.646000 audit[4474]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffe37fd58 a3=0 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.646000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.646000 audit: BPF prog-id=223 op=LOAD Dec 16 12:25:16.646000 audit[4474]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffe37fa18 a2=74 a3=95 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.646000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.646000 audit: BPF prog-id=223 op=UNLOAD Dec 16 12:25:16.646000 audit[4474]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.646000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.646000 audit: BPF prog-id=224 op=LOAD Dec 16 12:25:16.646000 audit[4474]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffe37fa78 a2=94 a3=2 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.646000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.646000 audit: BPF prog-id=224 op=UNLOAD Dec 16 12:25:16.646000 audit[4474]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.646000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.729493 kubelet[2747]: E1216 12:25:16.729449 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:16.730834 containerd[1581]: time="2025-12-16T12:25:16.730725574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-25l4t,Uid:3b16c732-4418-4cfc-b3b1-1b82c89afd86,Namespace:kube-system,Attempt:0,}" Dec 16 12:25:16.741000 audit: BPF prog-id=225 op=LOAD Dec 16 12:25:16.741000 audit[4474]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=4 a0=5 a1=fffffe37fa38 a2=40 a3=fffffe37fa68 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.741000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.742000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:25:16.742000 audit[4474]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffffe37fa68 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.742000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.752000 audit: BPF prog-id=226 op=LOAD Dec 16 12:25:16.752000 audit[4474]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffe37fa48 a2=94 a3=4 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.752000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:25:16.752000 audit[4474]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.752000 audit: BPF prog-id=227 op=LOAD Dec 16 12:25:16.752000 audit[4474]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffe37f888 a2=94 a3=5 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.752000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:25:16.752000 audit[4474]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.753000 audit: BPF prog-id=228 op=LOAD Dec 16 12:25:16.753000 audit[4474]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffe37fab8 a2=94 a3=6 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.753000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.753000 audit: BPF prog-id=228 op=UNLOAD Dec 16 12:25:16.753000 audit[4474]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.753000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.753000 audit: BPF prog-id=229 op=LOAD Dec 16 12:25:16.753000 audit[4474]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffe37f288 a2=94 a3=83 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.753000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.753000 audit: BPF prog-id=230 op=LOAD Dec 16 12:25:16.753000 audit[4474]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffffe37f048 a2=94 a3=2 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.753000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.753000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:25:16.753000 audit[4474]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.753000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.754000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:25:16.754000 audit[4474]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3c76e620 a3=3c761b00 items=0 ppid=4377 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.754000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:25:16.766000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:25:16.766000 audit[4377]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000c4f680 a2=0 a3=0 items=0 ppid=3890 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.766000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:25:16.831000 audit[4522]: NETFILTER_CFG table=nat:127 family=2 entries=15 
op=nft_register_chain pid=4522 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:25:16.831000 audit[4522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe37750e0 a2=0 a3=ffffaeb3cfa8 items=0 ppid=4377 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.831000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:25:16.832000 audit[4523]: NETFILTER_CFG table=mangle:128 family=2 entries=16 op=nft_register_chain pid=4523 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:25:16.832000 audit[4523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc27cce00 a2=0 a3=ffffb21d0fa8 items=0 ppid=4377 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.832000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:25:16.843000 audit[4521]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=4521 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:25:16.843000 audit[4521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffed1b2160 a2=0 a3=ffff9943cfa8 items=0 ppid=4377 pid=4521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.843000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:25:16.849000 audit[4525]: NETFILTER_CFG table=filter:130 family=2 entries=212 op=nft_register_chain pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:25:16.849000 audit[4525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=123208 a0=3 a1=ffffcf96f270 a2=0 a3=ffffb452ffa8 items=0 ppid=4377 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.849000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:25:16.884876 systemd-networkd[1489]: cali6a776e7811b: Link UP Dec 16 12:25:16.885716 systemd-networkd[1489]: cali6a776e7811b: Gained carrier Dec 16 12:25:16.900774 kubelet[2747]: E1216 12:25:16.900461 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:16.900774 kubelet[2747]: E1216 12:25:16.900690 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:16.901757 kubelet[2747]: E1216 12:25:16.901239 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-n9dd2" podUID="c186cd40-d4dc-48c3-8fe5-5af674baa410" Dec 16 12:25:16.903023 kubelet[2747]: E1216 12:25:16.902991 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll" podUID="1414ae41-c2cb-4936-90b7-c8854a1bb586" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.792 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--25l4t-eth0 coredns-674b8bbfcf- kube-system 3b16c732-4418-4cfc-b3b1-1b82c89afd86 840 0 2025-12-16 12:24:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-25l4t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a776e7811b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Namespace="kube-system" Pod="coredns-674b8bbfcf-25l4t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--25l4t-" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.792 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Namespace="kube-system" Pod="coredns-674b8bbfcf-25l4t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.825 [INFO][4502] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" HandleID="k8s-pod-network.7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Workload="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.825 [INFO][4502] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" HandleID="k8s-pod-network.7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Workload="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-25l4t", "timestamp":"2025-12-16 12:25:16.825518986 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.825 [INFO][4502] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.825 [INFO][4502] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.825 [INFO][4502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.841 [INFO][4502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" host="localhost" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.851 [INFO][4502] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.858 [INFO][4502] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.860 [INFO][4502] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.863 [INFO][4502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.863 [INFO][4502] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" host="localhost" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.865 [INFO][4502] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991 Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.870 [INFO][4502] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" host="localhost" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.878 [INFO][4502] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" host="localhost" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.879 [INFO][4502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" host="localhost" Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.879 [INFO][4502] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:25:16.915960 containerd[1581]: 2025-12-16 12:25:16.879 [INFO][4502] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" HandleID="k8s-pod-network.7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Workload="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" Dec 16 12:25:16.917474 containerd[1581]: 2025-12-16 12:25:16.882 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Namespace="kube-system" Pod="coredns-674b8bbfcf-25l4t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--25l4t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3b16c732-4418-4cfc-b3b1-1b82c89afd86", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-25l4t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a776e7811b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:16.917474 containerd[1581]: 2025-12-16 12:25:16.882 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Namespace="kube-system" Pod="coredns-674b8bbfcf-25l4t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" Dec 16 12:25:16.917474 containerd[1581]: 2025-12-16 12:25:16.882 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a776e7811b ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Namespace="kube-system" Pod="coredns-674b8bbfcf-25l4t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" Dec 16 12:25:16.917474 containerd[1581]: 2025-12-16 12:25:16.885 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Namespace="kube-system" Pod="coredns-674b8bbfcf-25l4t" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" Dec 16 12:25:16.917474 containerd[1581]: 2025-12-16 12:25:16.887 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Namespace="kube-system" Pod="coredns-674b8bbfcf-25l4t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--25l4t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3b16c732-4418-4cfc-b3b1-1b82c89afd86", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991", Pod:"coredns-674b8bbfcf-25l4t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a776e7811b", MAC:"72:4e:e8:0f:89:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:16.917474 containerd[1581]: 2025-12-16 12:25:16.912 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" Namespace="kube-system" Pod="coredns-674b8bbfcf-25l4t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--25l4t-eth0" Dec 16 12:25:16.930000 audit[4545]: NETFILTER_CFG table=filter:131 family=2 entries=36 op=nft_register_chain pid=4545 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:25:16.930000 audit[4545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19176 a0=3 a1=ffffd99f4fe0 a2=0 a3=ffffa9706fa8 items=0 ppid=4377 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.930000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:25:16.938224 kubelet[2747]: I1216 12:25:16.938149 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p6dhb" podStartSLOduration=37.938131327 podStartE2EDuration="37.938131327s" podCreationTimestamp="2025-12-16 12:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:25:16.937105242 +0000 UTC m=+44.319837545" watchObservedRunningTime="2025-12-16 12:25:16.938131327 +0000 UTC m=+44.320863590" Dec 16 12:25:16.964128 containerd[1581]: time="2025-12-16T12:25:16.964074484Z" level=info 
msg="connecting to shim 7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991" address="unix:///run/containerd/s/9bca6ca051e903d722865668923cfefd06f13871a977addfd8c6cde1272b922b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:16.962000 audit[4557]: NETFILTER_CFG table=filter:132 family=2 entries=21 op=nft_register_rule pid=4557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:16.962000 audit[4557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd5042500 a2=0 a3=1 items=0 ppid=2883 pid=4557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.962000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:16.968000 audit[4557]: NETFILTER_CFG table=nat:133 family=2 entries=19 op=nft_register_chain pid=4557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:16.968000 audit[4557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd5042500 a2=0 a3=1 items=0 ppid=2883 pid=4557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:16.968000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:17.006197 systemd[1]: Started cri-containerd-7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991.scope - libcontainer container 7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991. 
Dec 16 12:25:17.016000 audit: BPF prog-id=231 op=LOAD Dec 16 12:25:17.017000 audit: BPF prog-id=232 op=LOAD Dec 16 12:25:17.017000 audit[4569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4556 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765356535353265353162313139316233623666366535653861336663 Dec 16 12:25:17.017000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:25:17.017000 audit[4569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765356535353265353162313139316233623666366535653861336663 Dec 16 12:25:17.017000 audit: BPF prog-id=233 op=LOAD Dec 16 12:25:17.017000 audit[4569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4556 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.017000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765356535353265353162313139316233623666366535653861336663 Dec 16 12:25:17.017000 audit: BPF prog-id=234 op=LOAD Dec 16 12:25:17.017000 audit[4569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4556 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765356535353265353162313139316233623666366535653861336663 Dec 16 12:25:17.017000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:25:17.017000 audit[4569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765356535353265353162313139316233623666366535653861336663 Dec 16 12:25:17.017000 audit: BPF prog-id=233 op=UNLOAD Dec 16 12:25:17.017000 audit[4569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:25:17.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765356535353265353162313139316233623666366535653861336663 Dec 16 12:25:17.017000 audit: BPF prog-id=235 op=LOAD Dec 16 12:25:17.017000 audit[4569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4556 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765356535353265353162313139316233623666366535653861336663 Dec 16 12:25:17.019918 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:25:17.042667 systemd-networkd[1489]: cali18d30a7bf66: Gained IPv6LL Dec 16 12:25:17.044940 containerd[1581]: time="2025-12-16T12:25:17.044824039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-25l4t,Uid:3b16c732-4418-4cfc-b3b1-1b82c89afd86,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991\"" Dec 16 12:25:17.046019 kubelet[2747]: E1216 12:25:17.045964 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:17.051964 containerd[1581]: time="2025-12-16T12:25:17.051891960Z" level=info msg="CreateContainer within sandbox \"7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:25:17.078765 containerd[1581]: time="2025-12-16T12:25:17.078130920Z" level=info msg="Container 3be10155c34a5a14059987d201cde63e253939a6c1ff580df73c134dc86073cb: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:25:17.101004 containerd[1581]: time="2025-12-16T12:25:17.100938632Z" level=info msg="CreateContainer within sandbox \"7e5e552e51b1191b3b6f6e5e8a3fc982c3ca8e3786533444a1e84ea51d9cb991\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3be10155c34a5a14059987d201cde63e253939a6c1ff580df73c134dc86073cb\"" Dec 16 12:25:17.101509 containerd[1581]: time="2025-12-16T12:25:17.101477976Z" level=info msg="StartContainer for \"3be10155c34a5a14059987d201cde63e253939a6c1ff580df73c134dc86073cb\"" Dec 16 12:25:17.102539 containerd[1581]: time="2025-12-16T12:25:17.102510739Z" level=info msg="connecting to shim 3be10155c34a5a14059987d201cde63e253939a6c1ff580df73c134dc86073cb" address="unix:///run/containerd/s/9bca6ca051e903d722865668923cfefd06f13871a977addfd8c6cde1272b922b" protocol=ttrpc version=3 Dec 16 12:25:17.124169 systemd[1]: Started cri-containerd-3be10155c34a5a14059987d201cde63e253939a6c1ff580df73c134dc86073cb.scope - libcontainer container 3be10155c34a5a14059987d201cde63e253939a6c1ff580df73c134dc86073cb. 
Dec 16 12:25:17.134000 audit: BPF prog-id=236 op=LOAD Dec 16 12:25:17.134000 audit: BPF prog-id=237 op=LOAD Dec 16 12:25:17.134000 audit[4594]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4556 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362653130313535633334613561313430353939383764323031636465 Dec 16 12:25:17.134000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:25:17.134000 audit[4594]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362653130313535633334613561313430353939383764323031636465 Dec 16 12:25:17.134000 audit: BPF prog-id=238 op=LOAD Dec 16 12:25:17.134000 audit[4594]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4556 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.134000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362653130313535633334613561313430353939383764323031636465 Dec 16 12:25:17.134000 audit: BPF prog-id=239 op=LOAD Dec 16 12:25:17.134000 audit[4594]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4556 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362653130313535633334613561313430353939383764323031636465 Dec 16 12:25:17.135000 audit: BPF prog-id=239 op=UNLOAD Dec 16 12:25:17.135000 audit[4594]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362653130313535633334613561313430353939383764323031636465 Dec 16 12:25:17.135000 audit: BPF prog-id=238 op=UNLOAD Dec 16 12:25:17.135000 audit[4594]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:25:17.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362653130313535633334613561313430353939383764323031636465 Dec 16 12:25:17.135000 audit: BPF prog-id=240 op=LOAD Dec 16 12:25:17.135000 audit[4594]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4556 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362653130313535633334613561313430353939383764323031636465 Dec 16 12:25:17.153814 containerd[1581]: time="2025-12-16T12:25:17.153707467Z" level=info msg="StartContainer for \"3be10155c34a5a14059987d201cde63e253939a6c1ff580df73c134dc86073cb\" returns successfully" Dec 16 12:25:17.298121 systemd-networkd[1489]: cali5baf6ce852c: Gained IPv6LL Dec 16 12:25:17.729975 containerd[1581]: time="2025-12-16T12:25:17.729876500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67754b54bf-dz9mw,Uid:9f50ced9-6722-4b8a-92ec-e6e3732665dc,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:25:17.906690 kubelet[2747]: E1216 12:25:17.906651 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:17.908172 kubelet[2747]: E1216 12:25:17.908007 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Dec 16 12:25:17.919052 systemd-networkd[1489]: calid7bc1664e10: Link UP Dec 16 12:25:17.921338 systemd-networkd[1489]: calid7bc1664e10: Gained carrier Dec 16 12:25:17.935106 kubelet[2747]: I1216 12:25:17.934989 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-25l4t" podStartSLOduration=38.934954726 podStartE2EDuration="38.934954726s" podCreationTimestamp="2025-12-16 12:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:25:17.933406942 +0000 UTC m=+45.316139245" watchObservedRunningTime="2025-12-16 12:25:17.934954726 +0000 UTC m=+45.317687029" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.792 [INFO][4629] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0 calico-apiserver-67754b54bf- calico-apiserver 9f50ced9-6722-4b8a-92ec-e6e3732665dc 841 0 2025-12-16 12:24:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67754b54bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67754b54bf-dz9mw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid7bc1664e10 [] [] }} ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-dz9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.794 [INFO][4629] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-dz9mw" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.847 [INFO][4642] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" HandleID="k8s-pod-network.805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Workload="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.848 [INFO][4642] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" HandleID="k8s-pod-network.805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Workload="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a36d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67754b54bf-dz9mw", "timestamp":"2025-12-16 12:25:17.847824525 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.848 [INFO][4642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.848 [INFO][4642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.848 [INFO][4642] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.860 [INFO][4642] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" host="localhost" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.871 [INFO][4642] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.879 [INFO][4642] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.882 [INFO][4642] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.886 [INFO][4642] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.886 [INFO][4642] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" host="localhost" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.889 [INFO][4642] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.894 [INFO][4642] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" host="localhost" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.906 [INFO][4642] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" host="localhost" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.906 [INFO][4642] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" host="localhost" Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.906 [INFO][4642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:25:17.941091 containerd[1581]: 2025-12-16 12:25:17.906 [INFO][4642] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" HandleID="k8s-pod-network.805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Workload="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" Dec 16 12:25:17.942933 containerd[1581]: 2025-12-16 12:25:17.912 [INFO][4629] cni-plugin/k8s.go 418: Populated endpoint ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-dz9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0", GenerateName:"calico-apiserver-67754b54bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f50ced9-6722-4b8a-92ec-e6e3732665dc", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67754b54bf", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67754b54bf-dz9mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7bc1664e10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:17.942933 containerd[1581]: 2025-12-16 12:25:17.913 [INFO][4629] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-dz9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" Dec 16 12:25:17.942933 containerd[1581]: 2025-12-16 12:25:17.913 [INFO][4629] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7bc1664e10 ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-dz9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" Dec 16 12:25:17.942933 containerd[1581]: 2025-12-16 12:25:17.921 [INFO][4629] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-dz9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" Dec 16 12:25:17.942933 containerd[1581]: 2025-12-16 12:25:17.921 [INFO][4629] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-dz9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0", GenerateName:"calico-apiserver-67754b54bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f50ced9-6722-4b8a-92ec-e6e3732665dc", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67754b54bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d", Pod:"calico-apiserver-67754b54bf-dz9mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7bc1664e10", MAC:"26:a5:58:07:e8:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:17.942933 containerd[1581]: 2025-12-16 12:25:17.937 [INFO][4629] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" Namespace="calico-apiserver" Pod="calico-apiserver-67754b54bf-dz9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67754b54bf--dz9mw-eth0" Dec 16 12:25:17.955000 audit[4662]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:17.955000 audit[4662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff0160610 a2=0 a3=1 items=0 ppid=2883 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:17.960000 audit[4662]: NETFILTER_CFG table=nat:135 family=2 entries=14 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:17.960000 audit[4662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff0160610 a2=0 a3=1 items=0 ppid=2883 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:17.975000 audit[4664]: NETFILTER_CFG table=filter:136 family=2 entries=45 op=nft_register_chain pid=4664 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:25:17.975000 audit[4664]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24248 a0=3 a1=ffffd0975640 a2=0 a3=ffff8590afa8 items=0 ppid=4377 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:17.975000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:25:18.002075 containerd[1581]: time="2025-12-16T12:25:18.002026779Z" level=info msg="connecting to shim 805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d" address="unix:///run/containerd/s/909e32d7164eed6794491fbcfded33bccaa38fb6e5d6ebf165b4441f4734cb0c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:18.037205 systemd[1]: Started cri-containerd-805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d.scope - libcontainer container 805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d. Dec 16 12:25:18.049000 audit: BPF prog-id=241 op=LOAD Dec 16 12:25:18.049000 audit: BPF prog-id=242 op=LOAD Dec 16 12:25:18.049000 audit[4684]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4673 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:18.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830356466643535386631646465383765336164323735383437623465 Dec 16 12:25:18.049000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:25:18.049000 audit[4684]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4673 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:18.049000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830356466643535386631646465383765336164323735383437623465 Dec 16 12:25:18.049000 audit: BPF prog-id=243 op=LOAD Dec 16 12:25:18.049000 audit[4684]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4673 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:18.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830356466643535386631646465383765336164323735383437623465 Dec 16 12:25:18.050000 audit: BPF prog-id=244 op=LOAD Dec 16 12:25:18.050000 audit[4684]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4673 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:18.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830356466643535386631646465383765336164323735383437623465 Dec 16 12:25:18.050000 audit: BPF prog-id=244 op=UNLOAD Dec 16 12:25:18.050000 audit[4684]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4673 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:18.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830356466643535386631646465383765336164323735383437623465 Dec 16 12:25:18.050000 audit: BPF prog-id=243 op=UNLOAD Dec 16 12:25:18.050000 audit[4684]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4673 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:18.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830356466643535386631646465383765336164323735383437623465 Dec 16 12:25:18.050000 audit: BPF prog-id=245 op=LOAD Dec 16 12:25:18.050000 audit[4684]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4673 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:18.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830356466643535386631646465383765336164323735383437623465 Dec 16 12:25:18.051778 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:25:18.077802 containerd[1581]: time="2025-12-16T12:25:18.077756268Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67754b54bf-dz9mw,Uid:9f50ced9-6722-4b8a-92ec-e6e3732665dc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"805dfd558f1dde87e3ad275847b4ede34b8568f0776338af46581bbde008e19d\"" Dec 16 12:25:18.079965 containerd[1581]: time="2025-12-16T12:25:18.079836870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:25:18.130089 systemd-networkd[1489]: vxlan.calico: Gained IPv6LL Dec 16 12:25:18.293858 containerd[1581]: time="2025-12-16T12:25:18.293729713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:18.301561 containerd[1581]: time="2025-12-16T12:25:18.301493216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:25:18.301781 containerd[1581]: time="2025-12-16T12:25:18.301513018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:18.301899 kubelet[2747]: E1216 12:25:18.301856 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:18.302055 kubelet[2747]: E1216 12:25:18.302034 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:18.302337 kubelet[2747]: E1216 12:25:18.302294 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhdxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67754b54bf-dz9mw_calico-apiserver(9f50ced9-6722-4b8a-92ec-e6e3732665dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:18.303734 kubelet[2747]: E1216 12:25:18.303661 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw" podUID="9f50ced9-6722-4b8a-92ec-e6e3732665dc" Dec 16 12:25:18.642121 systemd-networkd[1489]: cali6a776e7811b: Gained IPv6LL Dec 16 12:25:18.917342 kubelet[2747]: E1216 12:25:18.916803 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Dec 16 12:25:18.917782 kubelet[2747]: E1216 12:25:18.917751 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:18.918126 kubelet[2747]: E1216 12:25:18.917993 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw" podUID="9f50ced9-6722-4b8a-92ec-e6e3732665dc" Dec 16 12:25:18.958000 audit[4709]: NETFILTER_CFG table=filter:137 family=2 entries=17 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:18.958000 audit[4709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe53260c0 a2=0 a3=1 items=0 ppid=2883 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:18.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:18.971000 audit[4709]: NETFILTER_CFG table=nat:138 family=2 entries=35 op=nft_register_chain pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:18.971000 audit[4709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe53260c0 a2=0 a3=1 items=0 ppid=2883 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:18.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:19.473244 systemd-networkd[1489]: calid7bc1664e10: Gained IPv6LL Dec 16 12:25:19.729883 containerd[1581]: time="2025-12-16T12:25:19.729465851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ndhz8,Uid:179aa3f5-01af-4f0c-91ba-27b0e8267d2b,Namespace:calico-system,Attempt:0,}" Dec 16 12:25:19.729883 containerd[1581]: time="2025-12-16T12:25:19.729483333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dccf794c6-mwtbf,Uid:7dc3261b-d36f-4639-8c3f-f9eff73dc960,Namespace:calico-system,Attempt:0,}" Dec 16 12:25:19.889483 systemd-networkd[1489]: cali2520576d083: Link UP Dec 16 12:25:19.891859 systemd-networkd[1489]: cali2520576d083: Gained carrier Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.800 [INFO][4718] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0 calico-kube-controllers-7dccf794c6- calico-system 7dc3261b-d36f-4639-8c3f-f9eff73dc960 838 0 2025-12-16 12:24:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dccf794c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7dccf794c6-mwtbf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2520576d083 [] [] }} ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Namespace="calico-system" Pod="calico-kube-controllers-7dccf794c6-mwtbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-" Dec 16 12:25:19.908287 
containerd[1581]: 2025-12-16 12:25:19.800 [INFO][4718] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Namespace="calico-system" Pod="calico-kube-controllers-7dccf794c6-mwtbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.837 [INFO][4745] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" HandleID="k8s-pod-network.b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Workload="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.837 [INFO][4745] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" HandleID="k8s-pod-network.b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Workload="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c37f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7dccf794c6-mwtbf", "timestamp":"2025-12-16 12:25:19.837552044 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.837 [INFO][4745] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.837 [INFO][4745] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.837 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.848 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" host="localhost" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.854 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.863 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.866 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.870 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.870 [INFO][4745] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" host="localhost" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.872 [INFO][4745] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393 Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.877 [INFO][4745] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" host="localhost" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.884 [INFO][4745] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" host="localhost" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.884 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" host="localhost" Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.884 [INFO][4745] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:25:19.908287 containerd[1581]: 2025-12-16 12:25:19.884 [INFO][4745] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" HandleID="k8s-pod-network.b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Workload="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" Dec 16 12:25:19.909290 containerd[1581]: 2025-12-16 12:25:19.887 [INFO][4718] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Namespace="calico-system" Pod="calico-kube-controllers-7dccf794c6-mwtbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0", GenerateName:"calico-kube-controllers-7dccf794c6-", Namespace:"calico-system", SelfLink:"", UID:"7dc3261b-d36f-4639-8c3f-f9eff73dc960", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dccf794c6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7dccf794c6-mwtbf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2520576d083", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:19.909290 containerd[1581]: 2025-12-16 12:25:19.887 [INFO][4718] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Namespace="calico-system" Pod="calico-kube-controllers-7dccf794c6-mwtbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" Dec 16 12:25:19.909290 containerd[1581]: 2025-12-16 12:25:19.887 [INFO][4718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2520576d083 ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Namespace="calico-system" Pod="calico-kube-controllers-7dccf794c6-mwtbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" Dec 16 12:25:19.909290 containerd[1581]: 2025-12-16 12:25:19.890 [INFO][4718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Namespace="calico-system" Pod="calico-kube-controllers-7dccf794c6-mwtbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" Dec 16 12:25:19.909290 containerd[1581]: 
2025-12-16 12:25:19.893 [INFO][4718] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Namespace="calico-system" Pod="calico-kube-controllers-7dccf794c6-mwtbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0", GenerateName:"calico-kube-controllers-7dccf794c6-", Namespace:"calico-system", SelfLink:"", UID:"7dc3261b-d36f-4639-8c3f-f9eff73dc960", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dccf794c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393", Pod:"calico-kube-controllers-7dccf794c6-mwtbf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2520576d083", MAC:"7e:b6:78:c8:f0:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:19.909290 containerd[1581]: 
2025-12-16 12:25:19.906 [INFO][4718] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" Namespace="calico-system" Pod="calico-kube-controllers-7dccf794c6-mwtbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dccf794c6--mwtbf-eth0" Dec 16 12:25:19.918385 kubelet[2747]: E1216 12:25:19.918308 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:19.921423 kubelet[2747]: E1216 12:25:19.921361 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw" podUID="9f50ced9-6722-4b8a-92ec-e6e3732665dc" Dec 16 12:25:19.923000 audit[4769]: NETFILTER_CFG table=filter:139 family=2 entries=48 op=nft_register_chain pid=4769 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:25:19.928330 kernel: kauditd_printk_skb: 389 callbacks suppressed Dec 16 12:25:19.928470 kernel: audit: type=1325 audit(1765887919.923:729): table=filter:139 family=2 entries=48 op=nft_register_chain pid=4769 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:25:19.928494 kernel: audit: type=1300 audit(1765887919.923:729): arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=fffff1b83fc0 a2=0 a3=ffffb3278fa8 items=0 ppid=4377 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:25:19.923000 audit[4769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=fffff1b83fc0 a2=0 a3=ffffb3278fa8 items=0 ppid=4377 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:19.923000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:25:19.935297 kernel: audit: type=1327 audit(1765887919.923:729): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:25:19.955295 containerd[1581]: time="2025-12-16T12:25:19.955055309Z" level=info msg="connecting to shim b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393" address="unix:///run/containerd/s/677b50357cca3cb8f17f95365e8af11245c814e4fadad1d49d94c959c7cf14ae" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:19.992393 systemd[1]: Started cri-containerd-b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393.scope - libcontainer container b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393. 
Dec 16 12:25:20.004000 audit[4811]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:20.007935 kernel: audit: type=1325 audit(1765887920.004:730): table=filter:140 family=2 entries=14 op=nft_register_rule pid=4811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:20.008006 kernel: audit: type=1300 audit(1765887920.004:730): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc0e02070 a2=0 a3=1 items=0 ppid=2883 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.004000 audit[4811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc0e02070 a2=0 a3=1 items=0 ppid=2883 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:20.013171 kernel: audit: type=1327 audit(1765887920.004:730): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:20.013300 kernel: audit: type=1334 audit(1765887920.010:731): prog-id=246 op=LOAD Dec 16 12:25:20.010000 audit: BPF prog-id=246 op=LOAD Dec 16 12:25:20.011000 audit: BPF prog-id=247 op=LOAD Dec 16 12:25:20.011000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:25:20.019118 kernel: audit: type=1334 audit(1765887920.011:732): prog-id=247 op=LOAD Dec 16 12:25:20.019220 kernel: audit: type=1300 audit(1765887920.011:732): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303236646639656133646133303964633163303338333337656130 Dec 16 12:25:20.024954 kernel: audit: type=1327 audit(1765887920.011:732): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303236646639656133646133303964633163303338333337656130 Dec 16 12:25:20.024994 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:25:20.013000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:25:20.013000 audit[4790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303236646639656133646133303964633163303338333337656130 Dec 16 12:25:20.013000 audit: BPF prog-id=248 op=LOAD Dec 16 12:25:20.013000 audit[4790]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303236646639656133646133303964633163303338333337656130 Dec 16 12:25:20.013000 audit: BPF prog-id=249 op=LOAD Dec 16 12:25:20.013000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303236646639656133646133303964633163303338333337656130 Dec 16 12:25:20.013000 audit: BPF prog-id=249 op=UNLOAD Dec 16 12:25:20.013000 audit[4790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303236646639656133646133303964633163303338333337656130 Dec 16 12:25:20.013000 audit: BPF prog-id=248 
op=UNLOAD Dec 16 12:25:20.013000 audit[4790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303236646639656133646133303964633163303338333337656130 Dec 16 12:25:20.014000 audit: BPF prog-id=250 op=LOAD Dec 16 12:25:20.014000 audit[4790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4779 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303236646639656133646133303964633163303338333337656130 Dec 16 12:25:20.027000 audit[4811]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=4811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:20.027000 audit[4811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffc0e02070 a2=0 a3=1 items=0 ppid=2883 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 
12:25:20.033349 systemd-networkd[1489]: calibe5bd7060f5: Link UP Dec 16 12:25:20.034846 systemd-networkd[1489]: calibe5bd7060f5: Gained carrier Dec 16 12:25:20.060900 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:39574.service - OpenSSH per-connection server daemon (10.0.0.1:39574). Dec 16 12:25:20.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.36:22-10.0.0.1:39574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.805 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ndhz8-eth0 csi-node-driver- calico-system 179aa3f5-01af-4f0c-91ba-27b0e8267d2b 733 0 2025-12-16 12:24:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ndhz8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibe5bd7060f5 [] [] }} ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Namespace="calico-system" Pod="csi-node-driver-ndhz8" WorkloadEndpoint="localhost-k8s-csi--node--driver--ndhz8-" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.806 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Namespace="calico-system" Pod="csi-node-driver-ndhz8" WorkloadEndpoint="localhost-k8s-csi--node--driver--ndhz8-eth0" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.843 [INFO][4751] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" HandleID="k8s-pod-network.122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Workload="localhost-k8s-csi--node--driver--ndhz8-eth0" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.843 [INFO][4751] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" HandleID="k8s-pod-network.122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Workload="localhost-k8s-csi--node--driver--ndhz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ndhz8", "timestamp":"2025-12-16 12:25:19.843342343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.843 [INFO][4751] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.885 [INFO][4751] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.885 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.953 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" host="localhost" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.972 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.981 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.984 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.987 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.987 [INFO][4751] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" host="localhost" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.990 [INFO][4751] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848 Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:19.998 [INFO][4751] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" host="localhost" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:20.014 [INFO][4751] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" host="localhost" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:20.019 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" host="localhost" Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:20.019 [INFO][4751] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:25:20.063319 containerd[1581]: 2025-12-16 12:25:20.019 [INFO][4751] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" HandleID="k8s-pod-network.122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Workload="localhost-k8s-csi--node--driver--ndhz8-eth0" Dec 16 12:25:20.064237 containerd[1581]: 2025-12-16 12:25:20.026 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Namespace="calico-system" Pod="csi-node-driver-ndhz8" WorkloadEndpoint="localhost-k8s-csi--node--driver--ndhz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ndhz8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"179aa3f5-01af-4f0c-91ba-27b0e8267d2b", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ndhz8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibe5bd7060f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:20.064237 containerd[1581]: 2025-12-16 12:25:20.026 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Namespace="calico-system" Pod="csi-node-driver-ndhz8" WorkloadEndpoint="localhost-k8s-csi--node--driver--ndhz8-eth0" Dec 16 12:25:20.064237 containerd[1581]: 2025-12-16 12:25:20.026 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe5bd7060f5 ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Namespace="calico-system" Pod="csi-node-driver-ndhz8" WorkloadEndpoint="localhost-k8s-csi--node--driver--ndhz8-eth0" Dec 16 12:25:20.064237 containerd[1581]: 2025-12-16 12:25:20.036 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Namespace="calico-system" Pod="csi-node-driver-ndhz8" WorkloadEndpoint="localhost-k8s-csi--node--driver--ndhz8-eth0" Dec 16 12:25:20.064237 containerd[1581]: 2025-12-16 12:25:20.038 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" 
Namespace="calico-system" Pod="csi-node-driver-ndhz8" WorkloadEndpoint="localhost-k8s-csi--node--driver--ndhz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ndhz8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"179aa3f5-01af-4f0c-91ba-27b0e8267d2b", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848", Pod:"csi-node-driver-ndhz8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibe5bd7060f5", MAC:"c6:53:dc:11:51:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:25:20.064237 containerd[1581]: 2025-12-16 12:25:20.058 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" Namespace="calico-system" Pod="csi-node-driver-ndhz8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ndhz8-eth0" Dec 16 12:25:20.077198 containerd[1581]: time="2025-12-16T12:25:20.076596782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dccf794c6-mwtbf,Uid:7dc3261b-d36f-4639-8c3f-f9eff73dc960,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0026df9ea3da309dc1c038337ea03ac1fff1af33504eca4ee4694186a526393\"" Dec 16 12:25:20.075000 audit[4829]: NETFILTER_CFG table=filter:142 family=2 entries=52 op=nft_register_chain pid=4829 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:25:20.075000 audit[4829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffef701d70 a2=0 a3=ffffa80e7fa8 items=0 ppid=4377 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.075000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:25:20.078831 containerd[1581]: time="2025-12-16T12:25:20.078785666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:25:20.102553 containerd[1581]: time="2025-12-16T12:25:20.102488473Z" level=info msg="connecting to shim 122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848" address="unix:///run/containerd/s/b2be1386b9ea5afabb55b9c928e9dac1f2626392ffd344999a3a1d6d89ee7a61" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:25:20.133224 systemd[1]: Started cri-containerd-122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848.scope - libcontainer container 122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848. 
Dec 16 12:25:20.143000 audit: BPF prog-id=251 op=LOAD Dec 16 12:25:20.143000 audit: BPF prog-id=252 op=LOAD Dec 16 12:25:20.143000 audit[4852]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4840 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326532656637373734323536376330643439616666333638656666 Dec 16 12:25:20.143000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:25:20.143000 audit[4852]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4840 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326532656637373734323536376330643439616666333638656666 Dec 16 12:25:20.143000 audit: BPF prog-id=253 op=LOAD Dec 16 12:25:20.143000 audit[4852]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4840 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.143000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326532656637373734323536376330643439616666333638656666 Dec 16 12:25:20.143000 audit: BPF prog-id=254 op=LOAD Dec 16 12:25:20.143000 audit[4852]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4840 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326532656637373734323536376330643439616666333638656666 Dec 16 12:25:20.143000 audit: BPF prog-id=254 op=UNLOAD Dec 16 12:25:20.143000 audit[4852]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4840 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326532656637373734323536376330643439616666333638656666 Dec 16 12:25:20.143000 audit: BPF prog-id=253 op=UNLOAD Dec 16 12:25:20.143000 audit[4852]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4840 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:25:20.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326532656637373734323536376330643439616666333638656666 Dec 16 12:25:20.144000 audit: BPF prog-id=255 op=LOAD Dec 16 12:25:20.144000 audit[4852]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4840 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326532656637373734323536376330643439616666333638656666 Dec 16 12:25:20.145989 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:25:20.150000 audit[4822]: USER_ACCT pid=4822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:20.151508 sshd[4822]: Accepted publickey for core from 10.0.0.1 port 39574 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:20.152000 audit[4822]: CRED_ACQ pid=4822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:20.152000 audit[4822]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff9b5580 a2=3 a3=0 
items=0 ppid=1 pid=4822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:20.152000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:20.154632 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:20.161882 systemd-logind[1558]: New session 9 of user core. Dec 16 12:25:20.167169 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:25:20.171097 containerd[1581]: time="2025-12-16T12:25:20.171030725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ndhz8,Uid:179aa3f5-01af-4f0c-91ba-27b0e8267d2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"122e2ef77742567c0d49aff368eff90861c9870d8c3546efc8b0cc207df4c848\"" Dec 16 12:25:20.170000 audit[4822]: USER_START pid=4822 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:20.172000 audit[4879]: CRED_ACQ pid=4879 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:20.297726 containerd[1581]: time="2025-12-16T12:25:20.297657982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:20.298912 containerd[1581]: time="2025-12-16T12:25:20.298840594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:25:20.299000 containerd[1581]: time="2025-12-16T12:25:20.298933765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:20.299234 kubelet[2747]: E1216 12:25:20.299187 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:25:20.299333 kubelet[2747]: E1216 12:25:20.299251 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:25:20.299646 containerd[1581]: time="2025-12-16T12:25:20.299617801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:25:20.301248 kubelet[2747]: E1216 12:25:20.299536 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njlhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dccf794c6-mwtbf_calico-system(7dc3261b-d36f-4639-8c3f-f9eff73dc960): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:20.302432 kubelet[2747]: E1216 12:25:20.302371 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf" podUID="7dc3261b-d36f-4639-8c3f-f9eff73dc960" Dec 16 12:25:20.331960 sshd[4879]: Connection closed by 10.0.0.1 port 39574 Dec 16 12:25:20.332603 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:20.333000 audit[4822]: USER_END pid=4822 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:20.333000 audit[4822]: CRED_DISP pid=4822 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:20.337644 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:39574.service: Deactivated successfully. Dec 16 12:25:20.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.36:22-10.0.0.1:39574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:20.340579 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:25:20.341775 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:25:20.343017 systemd-logind[1558]: Removed session 9. 
Dec 16 12:25:20.479556 kubelet[2747]: I1216 12:25:20.479466 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:25:20.480055 kubelet[2747]: E1216 12:25:20.480036 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:20.519241 containerd[1581]: time="2025-12-16T12:25:20.519194875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:20.520571 containerd[1581]: time="2025-12-16T12:25:20.520513863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:25:20.520672 containerd[1581]: time="2025-12-16T12:25:20.520629796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:20.521467 kubelet[2747]: E1216 12:25:20.520817 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:25:20.521467 kubelet[2747]: E1216 12:25:20.520901 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:25:20.522131 kubelet[2747]: E1216 12:25:20.522070 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnztb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ndhz8_calico-system(179aa3f5-01af-4f0c-91ba-27b0e8267d2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:25:20.525197 containerd[1581]: time="2025-12-16T12:25:20.525142499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:25:20.797028 containerd[1581]: time="2025-12-16T12:25:20.796965647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:20.804040 containerd[1581]: time="2025-12-16T12:25:20.803967148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:25:20.804163 containerd[1581]: time="2025-12-16T12:25:20.804092963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:20.804341 kubelet[2747]: E1216 12:25:20.804299 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:25:20.804425 kubelet[2747]: E1216 12:25:20.804355 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:25:20.804541 kubelet[2747]: E1216 12:25:20.804495 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnztb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ndhz8_calico-system(179aa3f5-01af-4f0c-91ba-27b0e8267d2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:20.805797 kubelet[2747]: E1216 12:25:20.805741 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b" Dec 16 12:25:20.923160 kubelet[2747]: E1216 12:25:20.923124 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:20.924624 kubelet[2747]: E1216 12:25:20.924123 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf" podUID="7dc3261b-d36f-4639-8c3f-f9eff73dc960" Dec 16 12:25:20.924862 kubelet[2747]: E1216 12:25:20.924780 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b" Dec 16 12:25:20.934230 kubelet[2747]: E1216 12:25:20.934187 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:21.393142 systemd-networkd[1489]: cali2520576d083: Gained IPv6LL Dec 16 12:25:21.926547 kubelet[2747]: E1216 12:25:21.926402 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf" podUID="7dc3261b-d36f-4639-8c3f-f9eff73dc960" Dec 16 12:25:21.928019 kubelet[2747]: E1216 12:25:21.927340 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b" Dec 16 12:25:21.969077 systemd-networkd[1489]: calibe5bd7060f5: Gained IPv6LL Dec 16 12:25:22.731138 containerd[1581]: time="2025-12-16T12:25:22.731093047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:25:22.989439 containerd[1581]: time="2025-12-16T12:25:22.989307291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:22.993713 containerd[1581]: time="2025-12-16T12:25:22.993629876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:25:22.993965 containerd[1581]: time="2025-12-16T12:25:22.993725206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:22.994007 kubelet[2747]: E1216 12:25:22.993955 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:25:22.994319 kubelet[2747]: E1216 12:25:22.994010 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:25:22.994319 kubelet[2747]: E1216 12:25:22.994161 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:68ad85c0c4be4b809ac5804e8fb5f9e2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9w6tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786b7dd598-6wh88_calico-system(0f73049e-3478-4f3a-8d48-04802f1162ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:22.996227 containerd[1581]: time="2025-12-16T12:25:22.996194752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:25:23.244835 containerd[1581]: time="2025-12-16T12:25:23.244706417Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:23.245769 containerd[1581]: time="2025-12-16T12:25:23.245730326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:25:23.245850 containerd[1581]: time="2025-12-16T12:25:23.245791932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:23.246016 kubelet[2747]: E1216 12:25:23.245978 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:25:23.246084 kubelet[2747]: E1216 12:25:23.246029 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:25:23.247199 kubelet[2747]: E1216 12:25:23.246152 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w6tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786b7dd598-6wh88_calico-system(0f73049e-3478-4f3a-8d48-04802f1162ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:23.247344 kubelet[2747]: E1216 12:25:23.247308 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786b7dd598-6wh88" podUID="0f73049e-3478-4f3a-8d48-04802f1162ec" Dec 16 12:25:25.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.36:22-10.0.0.1:42894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:25.346985 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:42894.service - OpenSSH per-connection server daemon (10.0.0.1:42894). Dec 16 12:25:25.351072 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:25:25.351149 kernel: audit: type=1130 audit(1765887925.346:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.36:22-10.0.0.1:42894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:25.398000 audit[4952]: USER_ACCT pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.399695 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 42894 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:25.402000 audit[4952]: CRED_ACQ pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.404026 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:25.407232 kernel: audit: type=1101 audit(1765887925.398:759): pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.407302 kernel: audit: type=1103 audit(1765887925.402:760): pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.409466 kernel: audit: type=1006 audit(1765887925.402:761): pid=4952 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 12:25:25.402000 audit[4952]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5af7a30 a2=3 a3=0 items=0 ppid=1 pid=4952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:25.413334 kernel: audit: type=1300 audit(1765887925.402:761): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5af7a30 a2=3 a3=0 items=0 ppid=1 pid=4952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:25.402000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:25.416825 kernel: audit: type=1327 audit(1765887925.402:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:25.422915 systemd-logind[1558]: New session 10 of user core. Dec 16 12:25:25.435313 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:25:25.436000 audit[4952]: USER_START pid=4952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.440000 audit[4955]: CRED_ACQ pid=4955 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.446498 kernel: audit: type=1105 audit(1765887925.436:762): pid=4952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.446606 kernel: audit: type=1103 audit(1765887925.440:763): pid=4955 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.559819 sshd[4955]: Connection closed by 10.0.0.1 port 42894 Dec 16 12:25:25.560350 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:25.561000 audit[4952]: USER_END pid=4952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.561000 audit[4952]: CRED_DISP pid=4952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.572529 kernel: audit: type=1106 audit(1765887925.561:764): pid=4952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.572694 kernel: audit: type=1104 audit(1765887925.561:765): pid=4952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.580588 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:42894.service: Deactivated successfully. Dec 16 12:25:25.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.36:22-10.0.0.1:42894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:25.583247 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:25:25.584207 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:25:25.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.36:22-10.0.0.1:42898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:25.587889 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:42898.service - OpenSSH per-connection server daemon (10.0.0.1:42898). Dec 16 12:25:25.590341 systemd-logind[1558]: Removed session 10. Dec 16 12:25:25.659000 audit[4969]: USER_ACCT pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.660322 sshd[4969]: Accepted publickey for core from 10.0.0.1 port 42898 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:25.661000 audit[4969]: CRED_ACQ pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.662000 audit[4969]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5bbb420 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:25.662000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:25.663363 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:25.670989 systemd-logind[1558]: 
New session 11 of user core. Dec 16 12:25:25.687171 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:25:25.688000 audit[4969]: USER_START pid=4969 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.690000 audit[4972]: CRED_ACQ pid=4972 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.844832 sshd[4972]: Connection closed by 10.0.0.1 port 42898 Dec 16 12:25:25.846000 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:25.846000 audit[4969]: USER_END pid=4969 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.847000 audit[4969]: CRED_DISP pid=4969 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.862582 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:42898.service: Deactivated successfully. Dec 16 12:25:25.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.36:22-10.0.0.1:42898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:25.866564 systemd[1]: session-11.scope: Deactivated successfully. 
Dec 16 12:25:25.868368 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:25:25.873569 systemd-logind[1558]: Removed session 11. Dec 16 12:25:25.876291 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:42902.service - OpenSSH per-connection server daemon (10.0.0.1:42902). Dec 16 12:25:25.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.36:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:25.944000 audit[4984]: USER_ACCT pid=4984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.946724 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 42902 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:25.946000 audit[4984]: CRED_ACQ pid=4984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.946000 audit[4984]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4ffe2a0 a2=3 a3=0 items=0 ppid=1 pid=4984 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:25.946000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:25.949152 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:25.956730 systemd-logind[1558]: New session 12 of user core. 
Dec 16 12:25:25.964176 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:25:25.964000 audit[4984]: USER_START pid=4984 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:25.966000 audit[4987]: CRED_ACQ pid=4987 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:26.103937 sshd[4987]: Connection closed by 10.0.0.1 port 42902 Dec 16 12:25:26.104831 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:26.104000 audit[4984]: USER_END pid=4984 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:26.104000 audit[4984]: CRED_DISP pid=4984 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:26.109377 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:42902.service: Deactivated successfully. Dec 16 12:25:26.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.36:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:26.113514 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 16 12:25:26.114664 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:25:26.116543 systemd-logind[1558]: Removed session 12. Dec 16 12:25:28.730808 containerd[1581]: time="2025-12-16T12:25:28.730482322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:25:28.954259 containerd[1581]: time="2025-12-16T12:25:28.954217191Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:28.955399 containerd[1581]: time="2025-12-16T12:25:28.955360023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:25:28.955490 containerd[1581]: time="2025-12-16T12:25:28.955437991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:28.955668 kubelet[2747]: E1216 12:25:28.955611 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:25:28.956021 kubelet[2747]: E1216 12:25:28.955670 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:25:28.956021 kubelet[2747]: E1216 12:25:28.955819 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvmxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n9dd2_calico-system(c186cd40-d4dc-48c3-8fe5-5af674baa410): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:28.957197 kubelet[2747]: E1216 12:25:28.957142 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9dd2" podUID="c186cd40-d4dc-48c3-8fe5-5af674baa410" Dec 16 12:25:29.730269 containerd[1581]: time="2025-12-16T12:25:29.730230657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:25:29.952003 containerd[1581]: time="2025-12-16T12:25:29.951956488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:29.953271 containerd[1581]: 
time="2025-12-16T12:25:29.953224850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:25:29.953761 containerd[1581]: time="2025-12-16T12:25:29.953268695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:29.953965 kubelet[2747]: E1216 12:25:29.953435 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:29.953965 kubelet[2747]: E1216 12:25:29.953482 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:29.953965 kubelet[2747]: E1216 12:25:29.953676 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvjdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67754b54bf-t5zll_calico-apiserver(1414ae41-c2cb-4936-90b7-c8854a1bb586): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:29.954928 kubelet[2747]: E1216 12:25:29.954876 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll" podUID="1414ae41-c2cb-4936-90b7-c8854a1bb586" Dec 16 12:25:31.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.36:22-10.0.0.1:55134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:31.124214 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:55134.service - OpenSSH per-connection server daemon (10.0.0.1:55134). Dec 16 12:25:31.127371 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:25:31.127457 kernel: audit: type=1130 audit(1765887931.123:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.36:22-10.0.0.1:55134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:31.187000 audit[5014]: USER_ACCT pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.188294 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 55134 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:31.190000 audit[5014]: CRED_ACQ pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.191532 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:31.194822 kernel: audit: type=1101 audit(1765887931.187:786): pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.194900 kernel: audit: type=1103 audit(1765887931.190:787): pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.195023 kernel: audit: type=1006 audit(1765887931.190:788): pid=5014 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 12:25:31.190000 audit[5014]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe661e1d0 a2=3 a3=0 items=0 ppid=1 pid=5014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:31.199383 systemd-logind[1558]: New session 13 of user core. Dec 16 12:25:31.200419 kernel: audit: type=1300 audit(1765887931.190:788): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe661e1d0 a2=3 a3=0 items=0 ppid=1 pid=5014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:31.190000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:31.202038 kernel: audit: type=1327 audit(1765887931.190:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:31.211159 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:25:31.212000 audit[5014]: USER_START pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.214000 audit[5017]: CRED_ACQ pid=5017 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.221400 kernel: audit: type=1105 audit(1765887931.212:789): pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.221465 kernel: audit: type=1103 audit(1765887931.214:790): pid=5017 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.318009 sshd[5017]: Connection closed by 10.0.0.1 port 55134 Dec 16 12:25:31.319086 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:31.322000 audit[5014]: USER_END pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.326281 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:25:31.326982 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:25:31.322000 audit[5014]: CRED_DISP pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.328727 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:55134.service: Deactivated successfully. 
Dec 16 12:25:31.330416 kernel: audit: type=1106 audit(1765887931.322:791): pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.330464 kernel: audit: type=1104 audit(1765887931.322:792): pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:31.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.36:22-10.0.0.1:55134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:31.332173 systemd-logind[1558]: Removed session 13. 
Dec 16 12:25:32.733566 containerd[1581]: time="2025-12-16T12:25:32.733107635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:25:32.955293 containerd[1581]: time="2025-12-16T12:25:32.955225920Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:32.969033 containerd[1581]: time="2025-12-16T12:25:32.968865753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:25:32.969033 containerd[1581]: time="2025-12-16T12:25:32.968902316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:32.969206 kubelet[2747]: E1216 12:25:32.969089 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:25:32.969206 kubelet[2747]: E1216 12:25:32.969141 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:25:32.969992 kubelet[2747]: E1216 12:25:32.969277 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnztb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ndhz8_calico-system(179aa3f5-01af-4f0c-91ba-27b0e8267d2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:25:32.972579 containerd[1581]: time="2025-12-16T12:25:32.972517933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:25:33.197288 containerd[1581]: time="2025-12-16T12:25:33.197182183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:33.198772 containerd[1581]: time="2025-12-16T12:25:33.198681322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:25:33.198940 containerd[1581]: time="2025-12-16T12:25:33.198759569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:33.199026 kubelet[2747]: E1216 12:25:33.198984 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:25:33.199086 kubelet[2747]: E1216 12:25:33.199045 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:25:33.199357 kubelet[2747]: E1216 12:25:33.199177 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnztb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ndhz8_calico-system(179aa3f5-01af-4f0c-91ba-27b0e8267d2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:33.200743 kubelet[2747]: E1216 12:25:33.200657 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b" Dec 16 12:25:33.730847 containerd[1581]: time="2025-12-16T12:25:33.730801530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:25:34.032616 containerd[1581]: time="2025-12-16T12:25:34.032555001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:34.037429 containerd[1581]: time="2025-12-16T12:25:34.037344918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:25:34.037429 containerd[1581]: time="2025-12-16T12:25:34.037397483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:34.037678 kubelet[2747]: E1216 12:25:34.037606 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:34.037678 kubelet[2747]: E1216 12:25:34.037654 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:34.038174 kubelet[2747]: E1216 12:25:34.037800 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhdxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67754b54bf-dz9mw_calico-apiserver(9f50ced9-6722-4b8a-92ec-e6e3732665dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:34.039963 kubelet[2747]: E1216 12:25:34.039862 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw" podUID="9f50ced9-6722-4b8a-92ec-e6e3732665dc" Dec 16 12:25:35.733456 containerd[1581]: time="2025-12-16T12:25:35.733402038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:25:35.942109 containerd[1581]: time="2025-12-16T12:25:35.941997766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:25:35.945221 containerd[1581]: time="2025-12-16T12:25:35.945099246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:25:35.945313 containerd[1581]: time="2025-12-16T12:25:35.945128009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:35.945922 kubelet[2747]: E1216 12:25:35.945572 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:25:35.945922 kubelet[2747]: E1216 12:25:35.945629 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:25:35.945922 kubelet[2747]: E1216 12:25:35.945780 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njlhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dccf794c6-mwtbf_calico-system(7dc3261b-d36f-4639-8c3f-f9eff73dc960): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:35.947626 kubelet[2747]: E1216 12:25:35.947260 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf" podUID="7dc3261b-d36f-4639-8c3f-f9eff73dc960" Dec 16 12:25:36.331561 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:55148.service - OpenSSH per-connection server daemon (10.0.0.1:55148). 
Dec 16 12:25:36.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.36:22-10.0.0.1:55148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.335840 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:25:36.335975 kernel: audit: type=1130 audit(1765887936.330:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.36:22-10.0.0.1:55148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.403000 audit[5032]: USER_ACCT pid=5032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.405709 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 55148 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:36.411196 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:36.408000 audit[5032]: CRED_ACQ pid=5032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.416459 systemd-logind[1558]: New session 14 of user core. 
Dec 16 12:25:36.417839 kernel: audit: type=1101 audit(1765887936.403:795): pid=5032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.417919 kernel: audit: type=1103 audit(1765887936.408:796): pid=5032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.417938 kernel: audit: type=1006 audit(1765887936.408:797): pid=5032 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 12:25:36.408000 audit[5032]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5eb23a0 a2=3 a3=0 items=0 ppid=1 pid=5032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:36.423427 kernel: audit: type=1300 audit(1765887936.408:797): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5eb23a0 a2=3 a3=0 items=0 ppid=1 pid=5032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:36.423490 kernel: audit: type=1327 audit(1765887936.408:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:36.408000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:36.430237 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:25:36.430000 audit[5032]: USER_START pid=5032 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.432000 audit[5035]: CRED_ACQ pid=5035 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.439592 kernel: audit: type=1105 audit(1765887936.430:798): pid=5032 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.439640 kernel: audit: type=1103 audit(1765887936.432:799): pid=5035 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.519330 sshd[5035]: Connection closed by 10.0.0.1 port 55148 Dec 16 12:25:36.519734 sshd-session[5032]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:36.519000 audit[5032]: USER_END pid=5032 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.519000 audit[5032]: CRED_DISP pid=5032 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.525180 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:25:36.525384 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:55148.service: Deactivated successfully. Dec 16 12:25:36.527678 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:25:36.528177 kernel: audit: type=1106 audit(1765887936.519:800): pid=5032 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.528276 kernel: audit: type=1104 audit(1765887936.519:801): pid=5032 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:36.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.36:22-10.0.0.1:55148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.530086 systemd-logind[1558]: Removed session 14. 
Dec 16 12:25:37.735211 kubelet[2747]: E1216 12:25:37.733377 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786b7dd598-6wh88" podUID="0f73049e-3478-4f3a-8d48-04802f1162ec" Dec 16 12:25:41.535883 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:56190.service - OpenSSH per-connection server daemon (10.0.0.1:56190). Dec 16 12:25:41.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.36:22-10.0.0.1:56190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:41.539548 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:25:41.539696 kernel: audit: type=1130 audit(1765887941.534:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.36:22-10.0.0.1:56190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:41.628462 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 56190 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:41.626000 audit[5059]: USER_ACCT pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.630000 audit[5059]: CRED_ACQ pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.633193 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:41.636144 kernel: audit: type=1101 audit(1765887941.626:804): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.636248 kernel: audit: type=1103 audit(1765887941.630:805): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.639456 kernel: audit: type=1006 audit(1765887941.630:806): pid=5059 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:25:41.630000 audit[5059]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb325830 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:41.644075 kernel: audit: type=1300 audit(1765887941.630:806): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb325830 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:41.630000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:41.645615 kernel: audit: type=1327 audit(1765887941.630:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:41.647896 systemd-logind[1558]: New session 15 of user core. Dec 16 12:25:41.659211 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:25:41.660000 audit[5059]: USER_START pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.663000 audit[5062]: CRED_ACQ pid=5062 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.670059 kernel: audit: type=1105 audit(1765887941.660:807): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.670238 kernel: audit: type=1103 audit(1765887941.663:808): pid=5062 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.731052 kubelet[2747]: E1216 12:25:41.730868 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9dd2" podUID="c186cd40-d4dc-48c3-8fe5-5af674baa410" Dec 16 12:25:41.868847 sshd[5062]: Connection closed by 10.0.0.1 port 56190 Dec 16 12:25:41.870161 sshd-session[5059]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:41.870000 audit[5059]: USER_END pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.874570 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:56190.service: Deactivated successfully. 
Dec 16 12:25:41.870000 audit[5059]: CRED_DISP pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.878016 kernel: audit: type=1106 audit(1765887941.870:809): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.878122 kernel: audit: type=1104 audit(1765887941.870:810): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:41.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.36:22-10.0.0.1:56190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:41.878351 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:25:41.880096 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:25:41.881517 systemd-logind[1558]: Removed session 15. 
Dec 16 12:25:43.729867 kubelet[2747]: E1216 12:25:43.729772 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll" podUID="1414ae41-c2cb-4936-90b7-c8854a1bb586" Dec 16 12:25:46.731228 kubelet[2747]: E1216 12:25:46.731122 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b" Dec 16 12:25:46.885869 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:56194.service - OpenSSH per-connection server daemon (10.0.0.1:56194). Dec 16 12:25:46.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.36:22-10.0.0.1:56194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:46.889522 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:25:46.889615 kernel: audit: type=1130 audit(1765887946.885:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.36:22-10.0.0.1:56194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:46.952000 audit[5080]: USER_ACCT pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:46.953573 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 56194 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:46.956078 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:46.954000 audit[5080]: CRED_ACQ pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:46.959969 kernel: audit: type=1101 audit(1765887946.952:813): pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:46.960115 kernel: audit: type=1103 audit(1765887946.954:814): pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:46.960151 kernel: audit: type=1006 audit(1765887946.954:815): pid=5080 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:25:46.961681 kernel: audit: type=1300 audit(1765887946.954:815): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff76eb290 a2=3 a3=0 items=0 ppid=1 pid=5080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:46.954000 audit[5080]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff76eb290 a2=3 a3=0 items=0 ppid=1 pid=5080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:46.962656 systemd-logind[1558]: New session 16 of user core. Dec 16 12:25:46.954000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:46.966087 kernel: audit: type=1327 audit(1765887946.954:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:46.968135 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:25:46.969000 audit[5080]: USER_START pid=5080 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:46.974065 kernel: audit: type=1105 audit(1765887946.969:816): pid=5080 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:46.974158 kernel: audit: type=1103 audit(1765887946.972:817): pid=5083 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:46.972000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.095950 sshd[5083]: Connection closed by 10.0.0.1 port 56194 Dec 16 12:25:47.096460 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:47.096000 audit[5080]: USER_END pid=5080 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.096000 audit[5080]: CRED_DISP pid=5080 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.104443 kernel: audit: type=1106 audit(1765887947.096:818): pid=5080 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.104499 kernel: audit: type=1104 audit(1765887947.096:819): pid=5080 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.36:22-10.0.0.1:56194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:47.109565 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:56194.service: Deactivated successfully. Dec 16 12:25:47.111255 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:25:47.113120 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:25:47.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.36:22-10.0.0.1:56210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:47.115532 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:56210.service - OpenSSH per-connection server daemon (10.0.0.1:56210). Dec 16 12:25:47.117349 systemd-logind[1558]: Removed session 16. 
Dec 16 12:25:47.188000 audit[5098]: USER_ACCT pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.190118 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 56210 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:47.190000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.190000 audit[5098]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd0ab33e0 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.190000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:47.192380 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:47.201223 systemd-logind[1558]: New session 17 of user core. Dec 16 12:25:47.207166 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:25:47.211000 audit[5098]: USER_START pid=5098 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.214000 audit[5101]: CRED_ACQ pid=5101 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.396734 sshd[5101]: Connection closed by 10.0.0.1 port 56210 Dec 16 12:25:47.396718 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:47.398000 audit[5098]: USER_END pid=5098 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.398000 audit[5098]: CRED_DISP pid=5098 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.409119 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:56210.service: Deactivated successfully. Dec 16 12:25:47.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.36:22-10.0.0.1:56210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:47.411882 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:25:47.415589 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit. 
Dec 16 12:25:47.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.36:22-10.0.0.1:56218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:47.418937 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:56218.service - OpenSSH per-connection server daemon (10.0.0.1:56218). Dec 16 12:25:47.421016 systemd-logind[1558]: Removed session 17. Dec 16 12:25:47.492000 audit[5113]: USER_ACCT pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.493774 sshd[5113]: Accepted publickey for core from 10.0.0.1 port 56218 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:47.493000 audit[5113]: CRED_ACQ pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.493000 audit[5113]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7ee03c0 a2=3 a3=0 items=0 ppid=1 pid=5113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:47.493000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:47.495185 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:47.500837 systemd-logind[1558]: New session 18 of user core. Dec 16 12:25:47.508159 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:25:47.511000 audit[5113]: USER_START pid=5113 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.513000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:47.730418 kubelet[2747]: E1216 12:25:47.729953 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:47.730546 kubelet[2747]: E1216 12:25:47.730420 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw" podUID="9f50ced9-6722-4b8a-92ec-e6e3732665dc" Dec 16 12:25:48.225000 audit[5129]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5129 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:48.225000 audit[5129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffab27900 a2=0 a3=1 items=0 ppid=2883 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:48.225000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:48.234636 sshd[5116]: Connection closed by 10.0.0.1 port 56218 Dec 16 12:25:48.234000 audit[5129]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5129 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:48.236242 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:48.238000 audit[5113]: USER_END pid=5113 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.238000 audit[5113]: CRED_DISP pid=5113 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.234000 audit[5129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffab27900 a2=0 a3=1 items=0 ppid=2883 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:48.234000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:48.246770 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:56218.service: Deactivated successfully. Dec 16 12:25:48.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.36:22-10.0.0.1:56218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:48.250940 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:25:48.253000 audit[5133]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:48.253000 audit[5133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcea9cd60 a2=0 a3=1 items=0 ppid=2883 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:48.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:48.255060 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:25:48.259743 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:56230.service - OpenSSH per-connection server daemon (10.0.0.1:56230). Dec 16 12:25:48.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.36:22-10.0.0.1:56230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:48.259000 audit[5133]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:48.259000 audit[5133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcea9cd60 a2=0 a3=1 items=0 ppid=2883 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:48.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:48.262288 systemd-logind[1558]: Removed session 18. Dec 16 12:25:48.327000 audit[5136]: USER_ACCT pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.329044 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 56230 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:48.329000 audit[5136]: CRED_ACQ pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.329000 audit[5136]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd529bd30 a2=3 a3=0 items=0 ppid=1 pid=5136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:48.329000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:48.330693 sshd-session[5136]: pam_unix(sshd:session): 
session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:48.337566 systemd-logind[1558]: New session 19 of user core. Dec 16 12:25:48.345185 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:25:48.346000 audit[5136]: USER_START pid=5136 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.348000 audit[5139]: CRED_ACQ pid=5139 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.631947 sshd[5139]: Connection closed by 10.0.0.1 port 56230 Dec 16 12:25:48.631867 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:48.635000 audit[5136]: USER_END pid=5136 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.635000 audit[5136]: CRED_DISP pid=5136 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.641633 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:56230.service: Deactivated successfully. Dec 16 12:25:48.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.36:22-10.0.0.1:56230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:48.644497 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:25:48.645845 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:25:48.649799 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:56232.service - OpenSSH per-connection server daemon (10.0.0.1:56232). Dec 16 12:25:48.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.36:22-10.0.0.1:56232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:48.650787 systemd-logind[1558]: Removed session 19. Dec 16 12:25:48.714240 sshd[5151]: Accepted publickey for core from 10.0.0.1 port 56232 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:48.713000 audit[5151]: USER_ACCT pid=5151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.714000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.714000 audit[5151]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe3f3eb60 a2=3 a3=0 items=0 ppid=1 pid=5151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:48.714000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:48.715685 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:48.720195 systemd-logind[1558]: 
New session 20 of user core. Dec 16 12:25:48.731022 kubelet[2747]: E1216 12:25:48.730949 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf" podUID="7dc3261b-d36f-4639-8c3f-f9eff73dc960" Dec 16 12:25:48.731192 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:25:48.732765 containerd[1581]: time="2025-12-16T12:25:48.732113887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:25:48.732000 audit[5151]: USER_START pid=5151 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.734000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.964128 sshd[5154]: Connection closed by 10.0.0.1 port 56232 Dec 16 12:25:48.964496 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:48.965000 audit[5151]: USER_END pid=5151 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Dec 16 12:25:48.965000 audit[5151]: CRED_DISP pid=5151 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:48.970376 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:56232.service: Deactivated successfully. Dec 16 12:25:48.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.36:22-10.0.0.1:56232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:48.973046 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:25:48.975506 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:25:48.978057 systemd-logind[1558]: Removed session 20. Dec 16 12:25:49.060508 containerd[1581]: time="2025-12-16T12:25:49.060445902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:49.067411 containerd[1581]: time="2025-12-16T12:25:49.067296385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:25:49.067411 containerd[1581]: time="2025-12-16T12:25:49.067343942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:49.067634 kubelet[2747]: E1216 12:25:49.067565 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:25:49.067634 kubelet[2747]: E1216 
12:25:49.067623 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:25:49.067966 kubelet[2747]: E1216 12:25:49.067750 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:68ad85c0c4be4b809ac5804e8fb5f9e2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9w6tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786b7dd598-6wh88_calico-system(0f73049e-3478-4f3a-8d48-04802f1162ec): ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:49.069888 containerd[1581]: time="2025-12-16T12:25:49.069851211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:25:49.299413 containerd[1581]: time="2025-12-16T12:25:49.299340508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:49.300600 containerd[1581]: time="2025-12-16T12:25:49.300555805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:25:49.300690 containerd[1581]: time="2025-12-16T12:25:49.300653320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:49.301257 kubelet[2747]: E1216 12:25:49.300928 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:25:49.301257 kubelet[2747]: E1216 12:25:49.300992 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:25:49.301257 kubelet[2747]: E1216 12:25:49.301132 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w6tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786b7dd598-6wh88_calico-system(0f73049e-3478-4f3a-8d48-04802f1162ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:49.302942 kubelet[2747]: E1216 12:25:49.302707 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786b7dd598-6wh88" podUID="0f73049e-3478-4f3a-8d48-04802f1162ec" Dec 16 12:25:52.732950 kubelet[2747]: E1216 12:25:52.732169 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:53.980865 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:38410.service - OpenSSH per-connection server daemon (10.0.0.1:38410). Dec 16 12:25:53.985129 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:25:53.985177 kernel: audit: type=1130 audit(1765887953.980:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.36:22-10.0.0.1:38410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:53.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.36:22-10.0.0.1:38410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:54.058000 audit[5196]: USER_ACCT pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.063189 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 38410 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:54.063957 kernel: audit: type=1101 audit(1765887954.058:862): pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.064021 kernel: audit: type=1103 audit(1765887954.062:863): pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.062000 audit[5196]: CRED_ACQ pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.064261 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:54.069632 kernel: audit: type=1006 audit(1765887954.062:864): pid=5196 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 16 12:25:54.069741 kernel: audit: type=1300 audit(1765887954.062:864): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6bfa720 a2=3 a3=0 items=0 ppid=1 pid=5196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:54.062000 audit[5196]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6bfa720 a2=3 a3=0 items=0 ppid=1 pid=5196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:54.072604 systemd-logind[1558]: New session 21 of user core. Dec 16 12:25:54.062000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:54.074007 kernel: audit: type=1327 audit(1765887954.062:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:54.080178 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:25:54.085000 audit[5196]: USER_START pid=5196 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.094074 kernel: audit: type=1105 audit(1765887954.085:865): pid=5196 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.093000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.100951 kernel: audit: type=1103 audit(1765887954.093:866): pid=5199 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.296003 sshd[5199]: Connection closed by 10.0.0.1 port 38410 Dec 16 12:25:54.297307 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:54.297000 audit[5196]: USER_END pid=5196 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.302264 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:38410.service: Deactivated successfully. Dec 16 12:25:54.298000 audit[5196]: CRED_DISP pid=5196 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.304985 systemd[1]: session-21.scope: Deactivated successfully. 
Dec 16 12:25:54.306132 kernel: audit: type=1106 audit(1765887954.297:867): pid=5196 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.306207 kernel: audit: type=1104 audit(1765887954.298:868): pid=5196 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:54.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.36:22-10.0.0.1:38410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:54.308351 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:25:54.310527 systemd-logind[1558]: Removed session 21. 
Dec 16 12:25:54.666000 audit[5212]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:54.666000 audit[5212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd76655f0 a2=0 a3=1 items=0 ppid=2883 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:54.666000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:54.676000 audit[5212]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=5212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:25:54.676000 audit[5212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd76655f0 a2=0 a3=1 items=0 ppid=2883 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:54.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:25:56.729109 kubelet[2747]: E1216 12:25:56.729068 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:25:56.731927 containerd[1581]: time="2025-12-16T12:25:56.731591584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:25:56.964888 containerd[1581]: time="2025-12-16T12:25:56.964826596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:56.967256 containerd[1581]: time="2025-12-16T12:25:56.967138926Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:25:56.967405 containerd[1581]: time="2025-12-16T12:25:56.967155846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:56.967704 kubelet[2747]: E1216 12:25:56.967643 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:25:56.967771 kubelet[2747]: E1216 12:25:56.967736 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:25:56.968312 kubelet[2747]: E1216 12:25:56.968207 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvmxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n9dd2_calico-system(c186cd40-d4dc-48c3-8fe5-5af674baa410): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:56.969483 kubelet[2747]: E1216 12:25:56.969423 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9dd2" podUID="c186cd40-d4dc-48c3-8fe5-5af674baa410" Dec 16 12:25:57.731632 containerd[1581]: time="2025-12-16T12:25:57.731573279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:25:57.986232 containerd[1581]: time="2025-12-16T12:25:57.986074957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:57.987528 containerd[1581]: 
time="2025-12-16T12:25:57.987476198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:25:57.987768 containerd[1581]: time="2025-12-16T12:25:57.987534637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:57.987930 kubelet[2747]: E1216 12:25:57.987881 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:57.988230 kubelet[2747]: E1216 12:25:57.987948 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:57.988230 kubelet[2747]: E1216 12:25:57.988104 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvjdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67754b54bf-t5zll_calico-apiserver(1414ae41-c2cb-4936-90b7-c8854a1bb586): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:57.989369 kubelet[2747]: E1216 12:25:57.989306 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-t5zll" podUID="1414ae41-c2cb-4936-90b7-c8854a1bb586" Dec 16 12:25:59.321199 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:38424.service - OpenSSH per-connection server daemon (10.0.0.1:38424). Dec 16 12:25:59.325737 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:25:59.325861 kernel: audit: type=1130 audit(1765887959.321:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.36:22-10.0.0.1:38424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:59.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.36:22-10.0.0.1:38424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:59.407000 audit[5220]: USER_ACCT pid=5220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.408653 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 38424 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:59.413786 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:59.412000 audit[5220]: CRED_ACQ pid=5220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.419125 kernel: audit: type=1101 audit(1765887959.407:873): pid=5220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.419223 kernel: audit: type=1103 audit(1765887959.412:874): pid=5220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.427006 systemd-logind[1558]: New session 22 of user core. 
Dec 16 12:25:59.427540 kernel: audit: type=1006 audit(1765887959.412:875): pid=5220 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 12:25:59.427574 kernel: audit: type=1300 audit(1765887959.412:875): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce03fc70 a2=3 a3=0 items=0 ppid=1 pid=5220 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:59.412000 audit[5220]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce03fc70 a2=3 a3=0 items=0 ppid=1 pid=5220 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:59.412000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:59.433647 kernel: audit: type=1327 audit(1765887959.412:875): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:59.435199 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 12:25:59.440000 audit[5220]: USER_START pid=5220 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.445000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.449392 kernel: audit: type=1105 audit(1765887959.440:876): pid=5220 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.449454 kernel: audit: type=1103 audit(1765887959.445:877): pid=5223 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.551021 sshd[5223]: Connection closed by 10.0.0.1 port 38424 Dec 16 12:25:59.551410 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:59.552000 audit[5220]: USER_END pid=5220 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.556222 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:38424.service: Deactivated successfully. 
Dec 16 12:25:59.558077 kernel: audit: type=1106 audit(1765887959.552:878): pid=5220 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.558188 kernel: audit: type=1104 audit(1765887959.553:879): pid=5220 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.553000 audit[5220]: CRED_DISP pid=5220 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:59.558487 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:25:59.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.36:22-10.0.0.1:38424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:59.561731 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:25:59.564183 systemd-logind[1558]: Removed session 22. 
Dec 16 12:25:59.731735 containerd[1581]: time="2025-12-16T12:25:59.731597994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:25:59.937560 containerd[1581]: time="2025-12-16T12:25:59.937380412Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:25:59.951671 containerd[1581]: time="2025-12-16T12:25:59.951593298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:25:59.951798 containerd[1581]: time="2025-12-16T12:25:59.951657537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:25:59.951889 kubelet[2747]: E1216 12:25:59.951847 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:59.952244 kubelet[2747]: E1216 12:25:59.951898 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:25:59.952244 kubelet[2747]: E1216 12:25:59.952051 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhdxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67754b54bf-dz9mw_calico-apiserver(9f50ced9-6722-4b8a-92ec-e6e3732665dc): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:25:59.953321 kubelet[2747]: E1216 12:25:59.953267 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67754b54bf-dz9mw" podUID="9f50ced9-6722-4b8a-92ec-e6e3732665dc" Dec 16 12:26:00.730353 containerd[1581]: time="2025-12-16T12:26:00.730309512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:26:00.962653 containerd[1581]: time="2025-12-16T12:26:00.962592371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:00.964477 containerd[1581]: time="2025-12-16T12:26:00.964409695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:26:00.964886 containerd[1581]: time="2025-12-16T12:26:00.964461694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:00.964958 kubelet[2747]: E1216 12:26:00.964683 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:26:00.964958 kubelet[2747]: E1216 12:26:00.964747 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:26:00.965233 kubelet[2747]: E1216 12:26:00.964975 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnztb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ndhz8_calico-system(179aa3f5-01af-4f0c-91ba-27b0e8267d2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:00.965336 containerd[1581]: time="2025-12-16T12:26:00.965210360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:26:01.178496 containerd[1581]: time="2025-12-16T12:26:01.178428543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:01.180104 containerd[1581]: time="2025-12-16T12:26:01.179870958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:26:01.180687 containerd[1581]: time="2025-12-16T12:26:01.179955677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:01.180786 kubelet[2747]: E1216 12:26:01.180352 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:26:01.180786 kubelet[2747]: E1216 12:26:01.180399 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:26:01.180786 kubelet[2747]: E1216 12:26:01.180627 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njlhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dccf794c6-mwtbf_calico-system(7dc3261b-d36f-4639-8c3f-f9eff73dc960): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:01.181157 containerd[1581]: time="2025-12-16T12:26:01.180874741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:26:01.182888 kubelet[2747]: E1216 12:26:01.182849 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dccf794c6-mwtbf" podUID="7dc3261b-d36f-4639-8c3f-f9eff73dc960" Dec 16 12:26:01.387099 containerd[1581]: time="2025-12-16T12:26:01.387039054Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Dec 16 12:26:01.388342 containerd[1581]: time="2025-12-16T12:26:01.388286033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:26:01.388342 containerd[1581]: time="2025-12-16T12:26:01.388372232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:01.388730 kubelet[2747]: E1216 12:26:01.388576 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:26:01.388730 kubelet[2747]: E1216 12:26:01.388634 2747 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:26:01.389033 kubelet[2747]: E1216 12:26:01.388869 2747 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnztb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ndhz8_calico-system(179aa3f5-01af-4f0c-91ba-27b0e8267d2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:01.390221 kubelet[2747]: E1216 12:26:01.390120 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ndhz8" podUID="179aa3f5-01af-4f0c-91ba-27b0e8267d2b" Dec 16 12:26:03.731875 kubelet[2747]: E1216 12:26:03.731798 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786b7dd598-6wh88" podUID="0f73049e-3478-4f3a-8d48-04802f1162ec" Dec 16 12:26:04.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.36:22-10.0.0.1:50438 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:04.568955 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:50438.service - OpenSSH per-connection server daemon (10.0.0.1:50438). Dec 16 12:26:04.572243 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:26:04.572367 kernel: audit: type=1130 audit(1765887964.567:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.36:22-10.0.0.1:50438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:04.642000 audit[5238]: USER_ACCT pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.644422 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 50438 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:26:04.645000 audit[5238]: CRED_ACQ pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.648127 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:04.650581 kernel: audit: type=1101 audit(1765887964.642:882): pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.650645 kernel: audit: type=1103 audit(1765887964.645:883): pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.652770 kernel: audit: type=1006 audit(1765887964.645:884): pid=5238 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:26:04.645000 audit[5238]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2f59920 a2=3 a3=0 items=0 ppid=1 pid=5238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.645000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:26:04.657622 kernel: audit: type=1300 audit(1765887964.645:884): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2f59920 a2=3 a3=0 items=0 ppid=1 pid=5238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.657688 kernel: audit: type=1327 audit(1765887964.645:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:26:04.659236 systemd-logind[1558]: New session 23 of user core. Dec 16 12:26:04.672173 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:26:04.673000 audit[5238]: USER_START pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.677000 audit[5241]: CRED_ACQ pid=5241 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.682842 kernel: audit: type=1105 audit(1765887964.673:885): pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.683578 kernel: audit: type=1103 audit(1765887964.677:886): pid=5241 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.794538 sshd[5241]: Connection closed by 10.0.0.1 port 50438 Dec 16 12:26:04.794946 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:04.795000 audit[5238]: USER_END pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.801208 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:50438.service: Deactivated successfully. 
Dec 16 12:26:04.803982 kernel: audit: type=1106 audit(1765887964.795:887): pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.804061 kernel: audit: type=1104 audit(1765887964.795:888): pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.795000 audit[5238]: CRED_DISP pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:04.803896 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:26:04.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.36:22-10.0.0.1:50438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:04.805986 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:26:04.807760 systemd-logind[1558]: Removed session 23. Dec 16 12:26:05.729240 kubelet[2747]: E1216 12:26:05.728948 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"