Dec 18 11:03:48.936364 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 18 11:03:48.936402 kernel: Linux version 6.12.62-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Thu Dec 18 09:31:58 -00 2025 Dec 18 11:03:48.936413 kernel: KASLR enabled Dec 18 11:03:48.936421 kernel: efi: EFI v2.7 by EDK II Dec 18 11:03:48.936429 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Dec 18 11:03:48.936435 kernel: random: crng init done Dec 18 11:03:48.936444 kernel: secureboot: Secure boot disabled Dec 18 11:03:48.936452 kernel: ACPI: Early table checksum verification disabled Dec 18 11:03:48.936459 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Dec 18 11:03:48.936467 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 18 11:03:48.936481 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 18 11:03:48.936491 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 18 11:03:48.936498 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 18 11:03:48.936505 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 18 11:03:48.936515 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 18 11:03:48.936524 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 18 11:03:48.936532 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 18 11:03:48.936541 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 18 11:03:48.936549 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 18 11:03:48.936556 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 18 11:03:48.936563 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 18 11:03:48.936569 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 18 11:03:48.936578 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Dec 18 11:03:48.936584 kernel: Zone ranges: Dec 18 11:03:48.936590 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 18 11:03:48.936599 kernel: DMA32 empty Dec 18 11:03:48.936608 kernel: Normal empty Dec 18 11:03:48.936628 kernel: Device empty Dec 18 11:03:48.936635 kernel: Movable zone start for each node Dec 18 11:03:48.936644 kernel: Early memory node ranges Dec 18 11:03:48.936651 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Dec 18 11:03:48.936657 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Dec 18 11:03:48.936666 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Dec 18 11:03:48.936675 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Dec 18 11:03:48.936681 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Dec 18 11:03:48.936690 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Dec 18 11:03:48.936696 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Dec 18 11:03:48.936707 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Dec 18 11:03:48.936717 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Dec 18 11:03:48.936724 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 18 11:03:48.936736 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 
18 11:03:48.936743 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 18 11:03:48.936753 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 18 11:03:48.936762 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 18 11:03:48.936769 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 18 11:03:48.936778 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Dec 18 11:03:48.936787 kernel: psci: probing for conduit method from ACPI. Dec 18 11:03:48.936794 kernel: psci: PSCIv1.1 detected in firmware. Dec 18 11:03:48.936801 kernel: psci: Using standard PSCI v0.2 function IDs Dec 18 11:03:48.936810 kernel: psci: Trusted OS migration not required Dec 18 11:03:48.936819 kernel: psci: SMC Calling Convention v1.1 Dec 18 11:03:48.936827 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 18 11:03:48.936834 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 18 11:03:48.936843 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 18 11:03:48.936850 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 18 11:03:48.936857 kernel: Detected PIPT I-cache on CPU0 Dec 18 11:03:48.936866 kernel: CPU features: detected: GIC system register CPU interface Dec 18 11:03:48.936873 kernel: CPU features: detected: Spectre-v4 Dec 18 11:03:48.936880 kernel: CPU features: detected: Spectre-BHB Dec 18 11:03:48.936887 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 18 11:03:48.936894 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 18 11:03:48.936902 kernel: CPU features: detected: ARM erratum 1418040 Dec 18 11:03:48.936915 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 18 11:03:48.936922 kernel: alternatives: applying boot alternatives Dec 18 11:03:48.936930 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=67ea6b9ff80915f5d75f36be0e7ac4f75895b0a3c97fecbd2e13aec087397454 Dec 18 11:03:48.936938 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 18 11:03:48.936945 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 18 11:03:48.936951 kernel: Fallback order for Node 0: 0 Dec 18 11:03:48.936960 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 18 11:03:48.936967 kernel: Policy zone: DMA Dec 18 11:03:48.936973 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 18 11:03:48.936980 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 18 11:03:48.936987 kernel: software IO TLB: area num 4. Dec 18 11:03:48.936995 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 18 11:03:48.937002 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Dec 18 11:03:48.937010 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 18 11:03:48.937017 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 18 11:03:48.937025 kernel: rcu: RCU event tracing is enabled. Dec 18 11:03:48.937034 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 18 11:03:48.937042 kernel: Trampoline variant of Tasks RCU enabled. Dec 18 11:03:48.937048 kernel: Tracing variant of Tasks RCU enabled. 
Dec 18 11:03:48.937055 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 18 11:03:48.937062 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 18 11:03:48.937069 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 18 11:03:48.937077 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 18 11:03:48.937085 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 18 11:03:48.937092 kernel: GICv3: 256 SPIs implemented Dec 18 11:03:48.937099 kernel: GICv3: 0 Extended SPIs implemented Dec 18 11:03:48.937106 kernel: Root IRQ handler: gic_handle_irq Dec 18 11:03:48.937113 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 18 11:03:48.937120 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 18 11:03:48.937127 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 18 11:03:48.937134 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 18 11:03:48.937141 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 18 11:03:48.937149 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 18 11:03:48.937157 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 18 11:03:48.937164 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 18 11:03:48.937171 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 18 11:03:48.937178 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 18 11:03:48.937185 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 18 11:03:48.937192 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 18 11:03:48.937199 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 18 11:03:48.937206 kernel: arm-pv: using stolen time PV Dec 18 11:03:48.937214 kernel: Console: colour dummy device 80x25 Dec 18 11:03:48.937221 kernel: ACPI: Core revision 20240827 Dec 18 11:03:48.937230 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 18 11:03:48.937239 kernel: pid_max: default: 32768 minimum: 301 Dec 18 11:03:48.937252 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 18 11:03:48.937260 kernel: landlock: Up and running. Dec 18 11:03:48.937267 kernel: SELinux: Initializing. Dec 18 11:03:48.937274 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 18 11:03:48.937282 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 18 11:03:48.937291 kernel: rcu: Hierarchical SRCU implementation. Dec 18 11:03:48.937299 kernel: rcu: Max phase no-delay instances is 400. Dec 18 11:03:48.937307 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 18 11:03:48.937314 kernel: Remapping and enabling EFI services. Dec 18 11:03:48.937321 kernel: smp: Bringing up secondary CPUs ... 
Dec 18 11:03:48.937329 kernel: Detected PIPT I-cache on CPU1 Dec 18 11:03:48.937341 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 18 11:03:48.937352 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 18 11:03:48.937359 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 18 11:03:48.937367 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 18 11:03:48.937376 kernel: Detected PIPT I-cache on CPU2 Dec 18 11:03:48.937383 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 18 11:03:48.937391 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 18 11:03:48.937399 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 18 11:03:48.937407 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 18 11:03:48.937414 kernel: Detected PIPT I-cache on CPU3 Dec 18 11:03:48.937422 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 18 11:03:48.937430 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 18 11:03:48.937437 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 18 11:03:48.937445 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 18 11:03:48.937452 kernel: smp: Brought up 1 node, 4 CPUs Dec 18 11:03:48.937462 kernel: SMP: Total of 4 processors activated. Dec 18 11:03:48.937469 kernel: CPU: All CPU(s) started at EL1 Dec 18 11:03:48.937477 kernel: CPU features: detected: 32-bit EL0 Support Dec 18 11:03:48.937485 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 18 11:03:48.937492 kernel: CPU features: detected: Common not Private translations Dec 18 11:03:48.937500 kernel: CPU features: detected: CRC32 instructions Dec 18 11:03:48.937507 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 18 11:03:48.937516 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 18 11:03:48.937524 kernel: CPU features: detected: LSE atomic instructions Dec 18 11:03:48.937531 kernel: CPU features: detected: Privileged Access Never Dec 18 11:03:48.937539 kernel: CPU features: detected: RAS Extension Support Dec 18 11:03:48.937546 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 18 11:03:48.937554 kernel: alternatives: applying system-wide alternatives Dec 18 11:03:48.937562 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 18 11:03:48.937570 kernel: Memory: 2450528K/2572288K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12736K init, 1038K bss, 99424K reserved, 16384K cma-reserved) Dec 18 11:03:48.937579 kernel: devtmpfs: initialized Dec 18 11:03:48.937586 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 18 11:03:48.937594 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 18 11:03:48.937602 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 18 11:03:48.937609 kernel: 0 pages in range for non-PLT usage Dec 18 11:03:48.937667 kernel: 515088 pages in range for PLT usage Dec 18 11:03:48.937675 kernel: pinctrl core: initialized pinctrl subsystem Dec 18 11:03:48.937685 kernel: SMBIOS 3.0.0 present. 
Dec 18 11:03:48.937693 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 18 11:03:48.937700 kernel: DMI: Memory slots populated: 1/1 Dec 18 11:03:48.937708 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 18 11:03:48.937715 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 18 11:03:48.937723 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 18 11:03:48.937731 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 18 11:03:48.937740 kernel: audit: initializing netlink subsys (disabled) Dec 18 11:03:48.937748 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Dec 18 11:03:48.937756 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 18 11:03:48.937763 kernel: cpuidle: using governor menu Dec 18 11:03:48.937771 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 18 11:03:48.937778 kernel: ASID allocator initialised with 32768 entries Dec 18 11:03:48.937786 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 18 11:03:48.937795 kernel: Serial: AMBA PL011 UART driver Dec 18 11:03:48.937803 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 18 11:03:48.937811 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 18 11:03:48.937818 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 18 11:03:48.937826 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 18 11:03:48.937833 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 18 11:03:48.937841 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 18 11:03:48.937848 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 18 11:03:48.937857 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 18 11:03:48.937864 kernel: ACPI: Added _OSI(Module Device) Dec 18 11:03:48.937872 kernel: ACPI: Added _OSI(Processor Device) Dec 18 11:03:48.937879 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 18 11:03:48.937886 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 18 11:03:48.937894 kernel: ACPI: Interpreter enabled Dec 18 11:03:48.937901 kernel: ACPI: Using GIC for interrupt routing Dec 18 11:03:48.937911 kernel: ACPI: MCFG table detected, 1 entries Dec 18 11:03:48.937918 kernel: ACPI: CPU0 has been hot-added Dec 18 11:03:48.937925 kernel: ACPI: CPU1 has been hot-added Dec 18 11:03:48.937933 kernel: ACPI: CPU2 has been hot-added Dec 18 11:03:48.937940 kernel: ACPI: CPU3 has been hot-added Dec 18 11:03:48.937948 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 18 11:03:48.937956 kernel: printk: legacy console [ttyAMA0] enabled Dec 18 11:03:48.937963 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 18 11:03:48.938152 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 18 11:03:48.938281 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 18 11:03:48.938395 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 18 11:03:48.938500 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 18 11:03:48.938601 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 18 11:03:48.938626 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 18 11:03:48.938635 
kernel: PCI host bridge to bus 0000:00 Dec 18 11:03:48.938759 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 18 11:03:48.938855 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 18 11:03:48.938957 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 18 11:03:48.939051 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 18 11:03:48.939176 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 18 11:03:48.939305 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 18 11:03:48.939417 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 18 11:03:48.939523 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 18 11:03:48.939636 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 18 11:03:48.939746 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 18 11:03:48.939850 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 18 11:03:48.939953 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 18 11:03:48.940049 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 18 11:03:48.940148 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 18 11:03:48.940249 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 18 11:03:48.940262 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 18 11:03:48.940270 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 18 11:03:48.940278 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 18 11:03:48.940285 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 18 11:03:48.940293 kernel: iommu: Default domain type: Translated Dec 18 11:03:48.940301 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 18 11:03:48.940309 kernel: efivars: Registered efivars operations Dec 18 11:03:48.940319 kernel: vgaarb: loaded Dec 18 11:03:48.940327 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 18 11:03:48.940335 kernel: VFS: Disk quotas dquot_6.6.0 Dec 18 11:03:48.940343 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 18 11:03:48.940351 kernel: pnp: PnP ACPI init Dec 18 11:03:48.940468 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 18 11:03:48.940480 kernel: pnp: PnP ACPI: found 1 devices Dec 18 11:03:48.940490 kernel: NET: Registered PF_INET protocol family Dec 18 11:03:48.940498 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 18 11:03:48.940506 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 18 11:03:48.940518 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 18 11:03:48.940526 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 18 11:03:48.940536 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 18 11:03:48.940548 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 18 11:03:48.940559 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 18 11:03:48.940569 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 18 11:03:48.940577 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 18 11:03:48.940585 kernel: PCI: CLS 0 bytes, default 64 Dec 18 11:03:48.940593 
kernel: kvm [1]: HYP mode not available Dec 18 11:03:48.940601 kernel: Initialise system trusted keyrings Dec 18 11:03:48.940609 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 18 11:03:48.940634 kernel: Key type asymmetric registered Dec 18 11:03:48.940642 kernel: Asymmetric key parser 'x509' registered Dec 18 11:03:48.940650 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 18 11:03:48.940659 kernel: io scheduler mq-deadline registered Dec 18 11:03:48.940666 kernel: io scheduler kyber registered Dec 18 11:03:48.940674 kernel: io scheduler bfq registered Dec 18 11:03:48.940683 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 18 11:03:48.940692 kernel: ACPI: button: Power Button [PWRB] Dec 18 11:03:48.940700 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 18 11:03:48.940811 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 18 11:03:48.940821 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 18 11:03:48.940829 kernel: thunder_xcv, ver 1.0 Dec 18 11:03:48.940837 kernel: thunder_bgx, ver 1.0 Dec 18 11:03:48.940844 kernel: nicpf, ver 1.0 Dec 18 11:03:48.940852 kernel: nicvf, ver 1.0 Dec 18 11:03:48.940964 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 18 11:03:48.941061 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-18T11:03:47 UTC (1766055827) Dec 18 11:03:48.941072 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 18 11:03:48.941080 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 18 11:03:48.941088 kernel: watchdog: NMI not fully supported Dec 18 11:03:48.941096 kernel: watchdog: Hard watchdog permanently disabled Dec 18 11:03:48.941106 kernel: NET: Registered PF_INET6 protocol family Dec 18 11:03:48.941114 kernel: Segment Routing with IPv6 Dec 18 11:03:48.941122 kernel: In-situ OAM (IOAM) with IPv6 Dec 18 11:03:48.941131 kernel: NET: Registered PF_PACKET protocol family Dec 18 11:03:48.941138 kernel: Key type dns_resolver registered Dec 18 11:03:48.941146 kernel: registered taskstats version 1 Dec 18 11:03:48.941154 kernel: Loading compiled-in X.509 certificates Dec 18 11:03:48.941163 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.62-flatcar: d2c67436b4b1a36e4282a9a52f30eda0d32ecb9d' Dec 18 11:03:48.941171 kernel: Demotion targets for Node 0: null Dec 18 11:03:48.941179 kernel: Key type .fscrypt registered Dec 18 11:03:48.941187 kernel: Key type fscrypt-provisioning registered Dec 18 11:03:48.941196 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 18 11:03:48.941206 kernel: ima: Allocated hash algorithm: sha1 Dec 18 11:03:48.941216 kernel: ima: No architecture policies found Dec 18 11:03:48.941226 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 18 11:03:48.941234 kernel: clk: Disabling unused clocks Dec 18 11:03:48.941250 kernel: PM: genpd: Disabling unused power domains Dec 18 11:03:48.941258 kernel: Freeing unused kernel memory: 12736K Dec 18 11:03:48.941266 kernel: Run /init as init process Dec 18 11:03:48.941274 kernel: with arguments: Dec 18 11:03:48.941281 kernel: /init Dec 18 11:03:48.941288 kernel: with environment: Dec 18 11:03:48.941298 kernel: HOME=/ Dec 18 11:03:48.941305 kernel: TERM=linux Dec 18 11:03:48.941429 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 18 11:03:48.941536 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 18 11:03:48.941548 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 18 11:03:48.941558 kernel: GPT:16515071 != 27000831 Dec 18 11:03:48.941566 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 18 11:03:48.941573 kernel: GPT:16515071 != 27000831 Dec 18 11:03:48.941581 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 18 11:03:48.941588 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 18 11:03:48.941596 kernel: SCSI subsystem initialized Dec 18 11:03:48.941604 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 18 11:03:48.941625 kernel: device-mapper: uevent: version 1.0.3 Dec 18 11:03:48.941633 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 18 11:03:48.941642 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:48.941652 kernel: raid6: neonx8 gen() 15789 MB/s Dec 18 11:03:48.941660 kernel: raid6: neonx4 gen() 15732 MB/s Dec 18 11:03:48.941667 kernel: raid6: neonx2 gen() 13176 MB/s Dec 18 11:03:48.941675 kernel: raid6: neonx1 gen() 10419 MB/s Dec 18 11:03:48.941682 kernel: raid6: int64x8 gen() 6821 MB/s Dec 18 11:03:48.941693 kernel: raid6: int64x4 gen() 7344 MB/s Dec 18 11:03:48.941700 kernel: raid6: int64x2 gen() 6106 MB/s Dec 18 11:03:48.941708 kernel: raid6: int64x1 gen() 5039 MB/s Dec 18 11:03:48.941715 kernel: raid6: using algorithm neonx8 gen() 15789 MB/s Dec 18 11:03:48.941724 kernel: raid6: .... 
xor() 11958 MB/s, rmw enabled Dec 18 11:03:48.941732 kernel: raid6: using neon recovery algorithm Dec 18 11:03:48.941739 kernel: xor: measuring software checksum speed Dec 18 11:03:48.941749 kernel: 8regs : 20897 MB/sec Dec 18 11:03:48.941757 kernel: 32regs : 21676 MB/sec Dec 18 11:03:48.941765 kernel: arm64_neon : 28128 MB/sec Dec 18 11:03:48.941772 kernel: xor: using function: arm64_neon (28128 MB/sec) Dec 18 11:03:48.941780 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 18 11:03:48.941788 kernel: BTRFS: device fsid 8e84e0df-856e-4b86-9688-efc6f67e3675 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (204) Dec 18 11:03:48.941797 kernel: BTRFS info (device dm-0): first mount of filesystem 8e84e0df-856e-4b86-9688-efc6f67e3675 Dec 18 11:03:48.941807 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 18 11:03:48.941815 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 18 11:03:48.941823 kernel: BTRFS info (device dm-0): enabling free space tree Dec 18 11:03:48.941831 kernel: loop: module loaded Dec 18 11:03:48.941839 kernel: loop0: detected capacity change from 0 to 97336 Dec 18 11:03:48.941847 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 18 11:03:48.941856 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored Dec 18 11:03:48.941868 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored Dec 18 11:03:48.941876 systemd[1]: Successfully made /usr/ read-only. Dec 18 11:03:48.941885 systemd[1]: systemd 258.2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 18 11:03:48.941894 systemd[1]: Detected virtualization kvm. Dec 18 11:03:48.941902 systemd[1]: Detected architecture arm64. Dec 18 11:03:48.941912 systemd[1]: Running in initrd. Dec 18 11:03:48.941920 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 18 11:03:48.941928 systemd[1]: No hostname configured, using default hostname. Dec 18 11:03:48.941936 systemd[1]: Hostname set to . Dec 18 11:03:48.941944 systemd[1]: Queued start job for default target initrd.target. Dec 18 11:03:48.941953 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 18 11:03:48.941961 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 18 11:03:48.941971 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 18 11:03:48.941980 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 18 11:03:48.941989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 18 11:03:48.941997 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 18 11:03:48.942006 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 18 11:03:48.942015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 18 11:03:48.942024 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Dec 18 11:03:48.942032 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 18 11:03:48.942040 systemd[1]: Reached target paths.target - Path Units. Dec 18 11:03:48.942048 systemd[1]: Reached target slices.target - Slice Units. Dec 18 11:03:48.942057 systemd[1]: Reached target swap.target - Swaps. Dec 18 11:03:48.942065 systemd[1]: Reached target timers.target - Timer Units. Dec 18 11:03:48.942074 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 18 11:03:48.942083 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 18 11:03:48.942099 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 18 11:03:48.942109 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 18 11:03:48.942118 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 18 11:03:48.942128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 18 11:03:48.942137 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 18 11:03:48.942146 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 18 11:03:48.942154 systemd[1]: Reached target sockets.target - Socket Units. Dec 18 11:03:48.942163 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 18 11:03:48.942172 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 18 11:03:48.942181 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 18 11:03:48.942191 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 18 11:03:48.942200 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 18 11:03:48.942209 systemd[1]: Starting systemd-fsck-usr.service... Dec 18 11:03:48.942218 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 18 11:03:48.942228 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 18 11:03:48.942236 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 18 11:03:48.942252 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 18 11:03:48.942262 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 18 11:03:48.942270 systemd[1]: Finished systemd-fsck-usr.service. Dec 18 11:03:48.942279 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 18 11:03:48.942317 systemd-journald[348]: Collecting audit messages is enabled. Dec 18 11:03:48.942337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 18 11:03:48.942346 kernel: Bridge firewalling registered Dec 18 11:03:48.942357 systemd-journald[348]: Journal started Dec 18 11:03:48.942375 systemd-journald[348]: Runtime Journal (/run/log/journal/0fb3edb6c86740df90d722ad6af62f93) is 6M, max 48.5M, 42.4M free. Dec 18 11:03:48.941156 systemd-modules-load[349]: Inserted module 'br_netfilter' Dec 18 11:03:48.950668 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 18 11:03:48.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:48.955308 kernel: audit: type=1130 audit(1766055828.950:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:48.955329 systemd[1]: Started systemd-journald.service - Journal Service. Dec 18 11:03:48.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:48.957766 kernel: audit: type=1130 audit(1766055828.954:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:48.957875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 18 11:03:48.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:48.962519 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 18 11:03:48.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:48.965796 kernel: audit: type=1130 audit(1766055828.959:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:48.965814 kernel: audit: type=1130 audit(1766055828.963:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:48.967165 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 18 11:03:48.969598 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 18 11:03:48.971486 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 18 11:03:48.986294 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 18 11:03:48.995286 systemd-tmpfiles[370]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 18 11:03:48.997001 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 18 11:03:48.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:48.999224 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 18 11:03:49.005894 kernel: audit: type=1130 audit(1766055828.997:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.004734 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 18 11:03:49.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.010328 kernel: audit: type=1130 audit(1766055829.002:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.007683 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 18 11:03:49.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.014888 kernel: audit: type=1130 audit(1766055829.006:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.013230 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 18 11:03:49.016000 audit: BPF prog-id=5 op=LOAD Dec 18 11:03:49.016633 kernel: audit: type=1130 audit(1766055829.011:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.016651 kernel: audit: type=1334 audit(1766055829.016:10): prog-id=5 op=LOAD Dec 18 11:03:49.017077 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 18 11:03:49.038394 dracut-cmdline[386]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=67ea6b9ff80915f5d75f36be0e7ac4f75895b0a3c97fecbd2e13aec087397454 Dec 18 11:03:49.060854 systemd-resolved[387]: Positive Trust Anchors: Dec 18 11:03:49.061003 systemd-resolved[387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 18 11:03:49.061006 systemd-resolved[387]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 18 11:03:49.061037 systemd-resolved[387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 18 11:03:49.086845 systemd-resolved[387]: Defaulting to hostname 'linux'. 
Dec 18 11:03:49.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.087888 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 18 11:03:49.088986 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 18 11:03:49.116645 kernel: Loading iSCSI transport class v2.0-870. Dec 18 11:03:49.125664 kernel: iscsi: registered transport (tcp) Dec 18 11:03:49.138693 kernel: iscsi: registered transport (qla4xxx) Dec 18 11:03:49.138736 kernel: QLogic iSCSI HBA Driver Dec 18 11:03:49.159524 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Dec 18 11:03:49.181767 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Dec 18 11:03:49.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.184062 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 18 11:03:49.228842 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 18 11:03:49.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.231168 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 18 11:03:49.232829 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 18 11:03:49.264473 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 18 11:03:49.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.265000 audit: BPF prog-id=6 op=LOAD Dec 18 11:03:49.265000 audit: BPF prog-id=7 op=LOAD Dec 18 11:03:49.267122 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 18 11:03:49.298411 systemd-udevd[627]: Using default interface naming scheme 'v258'. Dec 18 11:03:49.315714 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 18 11:03:49.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.319015 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 18 11:03:49.335327 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 18 11:03:49.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.337000 audit: BPF prog-id=8 op=LOAD Dec 18 11:03:49.338268 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 18 11:03:49.342996 dracut-pre-trigger[711]: rd.md=0: removing MD RAID activation Dec 18 11:03:49.371995 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 18 11:03:49.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.374031 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 18 11:03:49.383203 systemd-networkd[745]: lo: Link UP Dec 18 11:03:49.383213 systemd-networkd[745]: lo: Gained carrier Dec 18 11:03:49.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.383764 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 18 11:03:49.385131 systemd[1]: Reached target network.target - Network. Dec 18 11:03:49.467177 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 18 11:03:49.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.469503 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 18 11:03:49.528283 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 18 11:03:49.537519 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 18 11:03:49.559179 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 18 11:03:49.567319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 18 11:03:49.569664 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 18 11:03:49.587661 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 18 11:03:49.587771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 18 11:03:49.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.594568 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 18 11:03:49.598390 disk-uuid[810]: Primary Header is updated. Dec 18 11:03:49.598390 disk-uuid[810]: Secondary Entries is updated. Dec 18 11:03:49.598390 disk-uuid[810]: Secondary Header is updated. Dec 18 11:03:49.597211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 18 11:03:49.599234 systemd-networkd[745]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 18 11:03:49.599253 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 18 11:03:49.600376 systemd-networkd[745]: eth0: Link UP Dec 18 11:03:49.600572 systemd-networkd[745]: eth0: Gained carrier Dec 18 11:03:49.600585 systemd-networkd[745]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 18 11:03:49.614217 systemd-networkd[745]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 18 11:03:49.629264 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 18 11:03:49.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.673360 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 18 11:03:49.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:49.674999 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 18 11:03:49.676634 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 18 11:03:49.678653 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 18 11:03:49.681472 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 18 11:03:49.708033 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 18 11:03:49.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:50.628848 disk-uuid[813]: Warning: The kernel is still using the old partition table. Dec 18 11:03:50.628848 disk-uuid[813]: The new table will be used at the next reboot or after you Dec 18 11:03:50.628848 disk-uuid[813]: run partprobe(8) or kpartx(8) Dec 18 11:03:50.628848 disk-uuid[813]: The operation has completed successfully. Dec 18 11:03:50.638662 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 18 11:03:50.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:50.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:50.638800 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 18 11:03:50.641043 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 18 11:03:50.671020 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (842) Dec 18 11:03:50.671061 kernel: BTRFS info (device vda6): first mount of filesystem 57a51d9f-a97d-47b0-9cc4-34fac8959ce9 Dec 18 11:03:50.671073 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 18 11:03:50.674635 kernel: BTRFS info (device vda6): turning on async discard Dec 18 11:03:50.674660 kernel: BTRFS info (device vda6): enabling free space tree Dec 18 11:03:50.679633 kernel: BTRFS info (device vda6): last unmount of filesystem 57a51d9f-a97d-47b0-9cc4-34fac8959ce9 Dec 18 11:03:50.680106 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 18 11:03:50.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:50.682341 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 18 11:03:50.694720 systemd-networkd[745]: eth0: Gained IPv6LL Dec 18 11:03:50.771755 ignition[861]: Ignition 2.24.0 Dec 18 11:03:50.771767 ignition[861]: Stage: fetch-offline Dec 18 11:03:50.771806 ignition[861]: no configs at "/usr/lib/ignition/base.d" Dec 18 11:03:50.771816 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 18 11:03:50.771959 ignition[861]: parsed url from cmdline: "" Dec 18 11:03:50.771962 ignition[861]: no config URL provided Dec 18 11:03:50.771967 ignition[861]: reading system config file "/usr/lib/ignition/user.ign" Dec 18 11:03:50.771976 ignition[861]: no config at "/usr/lib/ignition/user.ign" Dec 18 11:03:50.772013 ignition[861]: op(1): [started] loading QEMU firmware config module Dec 18 11:03:50.772017 ignition[861]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 18 11:03:50.781154 ignition[861]: op(1): [finished] loading QEMU firmware config module Dec 18 11:03:50.823602 ignition[861]: parsing config with SHA512: 890c5f6239957f9c60f836872da202887a66d46f082621be90f67855c45269bbf81297bfe361e168aa20367814c262e7acd07eac293743f2ed35dd80bf768c70 Dec 18 11:03:50.829143 unknown[861]: fetched base config from "system" Dec 18 11:03:50.829681 unknown[861]: fetched user config from "qemu" Dec 18 11:03:50.830072 ignition[861]: fetch-offline: fetch-offline passed Dec 18 11:03:50.830133 ignition[861]: Ignition finished successfully Dec 18 11:03:50.832461 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 18 11:03:50.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:50.835829 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 18 11:03:50.836641 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 18 11:03:50.866550 ignition[874]: Ignition 2.24.0 Dec 18 11:03:50.866567 ignition[874]: Stage: kargs Dec 18 11:03:50.866723 ignition[874]: no configs at "/usr/lib/ignition/base.d" Dec 18 11:03:50.866731 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 18 11:03:50.867493 ignition[874]: kargs: kargs passed Dec 18 11:03:50.867535 ignition[874]: Ignition finished successfully Dec 18 11:03:50.870603 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 18 11:03:50.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:50.872602 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 18 11:03:50.896502 ignition[881]: Ignition 2.24.0 Dec 18 11:03:50.896520 ignition[881]: Stage: disks Dec 18 11:03:50.896692 ignition[881]: no configs at "/usr/lib/ignition/base.d" Dec 18 11:03:50.896701 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 18 11:03:50.897469 ignition[881]: disks: disks passed Dec 18 11:03:50.897514 ignition[881]: Ignition finished successfully Dec 18 11:03:50.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:50.900137 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 18 11:03:50.901319 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 18 11:03:50.902783 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 18 11:03:50.904762 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 18 11:03:50.906656 systemd[1]: Reached target sysinit.target - System Initialization. Dec 18 11:03:50.908607 systemd[1]: Reached target basic.target - Basic System. Dec 18 11:03:50.911212 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 18 11:03:50.943485 systemd-fsck[891]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 18 11:03:50.948450 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 18 11:03:50.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:50.955710 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 18 11:03:51.012648 kernel: EXT4-fs (vda9): mounted filesystem 6c434b81-e9ec-4224-9573-7e5e3033c27e r/w with ordered data mode. Quota mode: none. Dec 18 11:03:51.013479 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 18 11:03:51.014736 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 18 11:03:51.017843 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 18 11:03:51.020063 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 18 11:03:51.021033 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 18 11:03:51.021066 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 18 11:03:51.021110 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 18 11:03:51.035385 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 18 11:03:51.038011 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 18 11:03:51.042634 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (899) Dec 18 11:03:51.042657 kernel: BTRFS info (device vda6): first mount of filesystem 57a51d9f-a97d-47b0-9cc4-34fac8959ce9 Dec 18 11:03:51.042674 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 18 11:03:51.045105 kernel: BTRFS info (device vda6): turning on async discard Dec 18 11:03:51.045132 kernel: BTRFS info (device vda6): enabling free space tree Dec 18 11:03:51.046680 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 18 11:03:51.143832 kernel: loop1: detected capacity change from 0 to 38472 Dec 18 11:03:51.143888 kernel: loop1: p1 p2 p3 Dec 18 11:03:51.145643 kernel: loop1: p1 p2 p3 Dec 18 11:03:51.162932 kernel: erofs: (device loop1p1): mounted with root inode @ nid 40. 
Dec 18 11:03:51.196638 kernel: loop2: detected capacity change from 0 to 38472 Dec 18 11:03:51.197642 kernel: loop2: p1 p2 p3 Dec 18 11:03:51.207619 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:51.207653 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 18 11:03:51.207665 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Dec 18 11:03:51.209179 kernel: device-mapper: ioctl: error adding target to table Dec 18 11:03:51.209211 (sd-merge)[992]: device-mapper: reload ioctl on 4286a4ec1f248d897b3f4c0e9aec6f77bef9dc12c0204c470f1b186dba607534-verity (253:1) failed: Invalid argument Dec 18 11:03:51.219655 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:51.243407 (sd-merge)[992]: Using extensions '00-flatcar-default.raw'. Dec 18 11:03:51.244209 (sd-merge)[992]: Merged extensions into '/sysroot/etc'. Dec 18 11:03:51.245191 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Dec 18 11:03:51.249659 initrd-setup-root[999]: /etc 00-flatcar-default Thu 2025-12-18 11:03:48 UTC Dec 18 11:03:51.250374 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 18 11:03:51.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:51.253915 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 18 11:03:51.255445 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 18 11:03:51.279704 kernel: BTRFS info (device vda6): last unmount of filesystem 57a51d9f-a97d-47b0-9cc4-34fac8959ce9 Dec 18 11:03:51.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:51.296699 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 18 11:03:51.306759 ignition[1008]: INFO : Ignition 2.24.0 Dec 18 11:03:51.306759 ignition[1008]: INFO : Stage: mount Dec 18 11:03:51.308301 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 18 11:03:51.308301 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 18 11:03:51.308301 ignition[1008]: INFO : mount: mount passed Dec 18 11:03:51.308301 ignition[1008]: INFO : Ignition finished successfully Dec 18 11:03:51.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:51.310132 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 18 11:03:51.312127 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 18 11:03:51.928050 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 18 11:03:51.929751 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 18 11:03:51.958631 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1021) Dec 18 11:03:51.958671 kernel: BTRFS info (device vda6): first mount of filesystem 57a51d9f-a97d-47b0-9cc4-34fac8959ce9 Dec 18 11:03:51.960683 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 18 11:03:51.964643 kernel: BTRFS info (device vda6): turning on async discard Dec 18 11:03:51.964678 kernel: BTRFS info (device vda6): enabling free space tree Dec 18 11:03:51.966967 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 18 11:03:51.993928 ignition[1037]: INFO : Ignition 2.24.0 Dec 18 11:03:51.993928 ignition[1037]: INFO : Stage: files Dec 18 11:03:51.995635 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 18 11:03:51.995635 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 18 11:03:51.995635 ignition[1037]: DEBUG : files: compiled without relabeling support, skipping Dec 18 11:03:51.995635 ignition[1037]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 18 11:03:51.995635 ignition[1037]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 18 11:03:52.001263 ignition[1037]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 18 11:03:52.001263 ignition[1037]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 18 11:03:52.001263 ignition[1037]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 18 11:03:52.000918 unknown[1037]: wrote ssh authorized keys file for user: core Dec 18 11:03:52.006352 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 18 11:03:52.006352 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 18 11:03:52.096491 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 18 11:03:52.226137 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 18 11:03:52.226137 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 18 11:03:52.229969 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 18 11:03:52.229969 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 18 11:03:52.229969 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 18 11:03:52.229969 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 18 11:03:52.229969 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 18 11:03:52.229969 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 18 11:03:52.229969 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 18 11:03:52.229969 ignition[1037]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 18 11:03:52.243121 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 18 11:03:52.243121 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 18 11:03:52.243121 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 18 11:03:52.243121 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 18 11:03:52.243121 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 18 11:03:52.637457 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 18 11:03:53.310562 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 18 11:03:53.310562 ignition[1037]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 18 11:03:53.314708 ignition[1037]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 18 11:03:53.316641 ignition[1037]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 18 11:03:53.316641 ignition[1037]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 18 11:03:53.316641 ignition[1037]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 18 11:03:53.316641 ignition[1037]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 18 11:03:53.316641 ignition[1037]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 18 11:03:53.316641 ignition[1037]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 18 11:03:53.316641 ignition[1037]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 18 11:03:53.345891 ignition[1037]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 18 11:03:53.349887 ignition[1037]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 18 11:03:53.352653 ignition[1037]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 18 11:03:53.352653 ignition[1037]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 18 11:03:53.352653 ignition[1037]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 18 11:03:53.352653 ignition[1037]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 18 11:03:53.352653 ignition[1037]: INFO : files: createResultFile: createFiles: op(12): [finished] writing 
file "/sysroot/etc/.ignition-result.json" Dec 18 11:03:53.352653 ignition[1037]: INFO : files: files passed Dec 18 11:03:53.352653 ignition[1037]: INFO : Ignition finished successfully Dec 18 11:03:53.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.367512 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 18 11:03:53.354344 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 18 11:03:53.368740 kernel: audit: type=1130 audit(1766055833.356:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.359326 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 18 11:03:53.362401 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 18 11:03:53.375022 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 18 11:03:53.375146 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 18 11:03:53.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.380214 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Dec 18 11:03:53.384038 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 18 11:03:53.384038 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 18 11:03:53.387014 kernel: audit: type=1130 audit(1766055833.376:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.387040 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 18 11:03:53.388667 kernel: audit: type=1131 audit(1766055833.376:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.388686 kernel: loop3: detected capacity change from 0 to 38472 Dec 18 11:03:53.390629 kernel: loop3: p1 p2 p3 Dec 18 11:03:53.398635 kernel: erofs: (device loop3p1): mounted with root inode @ nid 40. 
Dec 18 11:03:53.435647 kernel: loop4: detected capacity change from 0 to 38472 Dec 18 11:03:53.436648 kernel: loop4: p1 p2 p3 Dec 18 11:03:53.446302 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:53.446330 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 18 11:03:53.446342 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 18 11:03:53.447298 kernel: device-mapper: ioctl: error adding target to table Dec 18 11:03:53.447974 (sd-merge)[1080]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument Dec 18 11:03:53.452657 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:53.475645 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Dec 18 11:03:53.475871 (sd-merge)[1080]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Dec 18 11:03:53.485644 kernel: loop5: detected capacity change from 0 to 353272 Dec 18 11:03:53.487806 kernel: loop5: p1 p2 p3 Dec 18 11:03:53.489823 kernel: device-mapper: ioctl: remove_all left 2 open device(s) Dec 18 11:03:53.499637 kernel: erofs: (device loop5p1): mounted with root inode @ nid 39. Dec 18 11:03:53.529649 kernel: loop4: detected capacity change from 0 to 161080 Dec 18 11:03:53.530633 kernel: loop4: p1 p2 p3 Dec 18 11:03:53.540640 kernel: erofs: (device loop4p1): mounted with root inode @ nid 39. Dec 18 11:03:53.567659 kernel: loop6: detected capacity change from 0 to 207008 Dec 18 11:03:53.606766 kernel: loop7: detected capacity change from 0 to 353272 Dec 18 11:03:53.608652 kernel: loop7: p1 p2 p3 Dec 18 11:03:53.617779 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:53.617827 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 18 11:03:53.618790 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 18 11:03:53.619523 (sd-merge)[1092]: device-mapper: reload ioctl on fd2cc42b0e52d7891eecc7902f32d3fe0d587a5d379b9fa334be7716f8994b64-verity (253:2) failed: Invalid argument Dec 18 11:03:53.622749 kernel: device-mapper: ioctl: error adding target to table Dec 18 11:03:53.625753 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:53.650642 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 18 11:03:53.652667 kernel: loop1: detected capacity change from 0 to 161080 Dec 18 11:03:53.653637 kernel: loop1: p1 p2 p3 Dec 18 11:03:53.663133 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:53.663168 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 18 11:03:53.663180 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL) Dec 18 11:03:53.664174 kernel: device-mapper: ioctl: error adding target to table Dec 18 11:03:53.664831 (sd-merge)[1092]: device-mapper: reload ioctl on c99cdab8f3e13627090305b6d86bfefcd3eebc04d59c64be5684e30760289511-verity (253:3) failed: Invalid argument Dec 18 11:03:53.670668 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:53.694646 kernel: erofs: (device dm-3): mounted with root inode @ nid 39. Dec 18 11:03:53.696633 kernel: loop3: detected capacity change from 0 to 207008 Dec 18 11:03:53.701735 (sd-merge)[1092]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.32.4-arm64.raw'. 
Dec 18 11:03:53.702724 (sd-merge)[1092]: Merged extensions into '/sysroot/usr'. Dec 18 11:03:53.705718 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 18 11:03:53.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.706995 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 18 11:03:53.711970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 18 11:03:53.714126 kernel: audit: type=1130 audit(1766055833.706:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.735989 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 18 11:03:53.736114 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 18 11:03:53.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.743512 kernel: audit: type=1130 audit(1766055833.737:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.738022 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies. Dec 18 11:03:53.745211 kernel: audit: type=1131 audit(1766055833.737:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.738219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 18 11:03:53.744344 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 18 11:03:53.746360 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 18 11:03:53.747215 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 18 11:03:53.771699 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 18 11:03:53.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.774032 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 18 11:03:53.778172 kernel: audit: type=1130 audit(1766055833.772:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.791701 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 18 11:03:53.792835 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 18 11:03:53.794696 systemd[1]: Stopped target timers.target - Timer Units. 
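Before the teardown starts, (sd-merge) reports the extension images it found ('containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.32.4-arm64.raw') and overlays them onto /sysroot/usr. A simplified sketch of just the discovery half of that step, assuming a plain directory of *.raw images; it mirrors the log messages conceptually and is not systemd's implementation:

from pathlib import Path

def discover_extensions(extensions_dir: str = "/sysroot/etc/extensions"):
    """List candidate sysext images the way the (sd-merge) messages above name them."""
    root = Path(extensions_dir)
    if not root.is_dir():
        return []
    # A symlink such as kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw
    # (written by the Ignition files stage earlier) shows up here by its link name.
    return sorted(p.name for p in root.glob("*.raw"))

images = discover_extensions()
if images:
    print("Using extensions " + ", ".join("'" + name + "'" for name in images) + ".")
else:
    print("Skipping extension refresh because no change was found.")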
Dec 18 11:03:53.796434 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 18 11:03:53.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.796546 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 18 11:03:53.799780 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 18 11:03:53.804428 kernel: audit: type=1131 audit(1766055833.797:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.802013 systemd[1]: Stopped target basic.target - Basic System. Dec 18 11:03:53.803687 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 18 11:03:53.805451 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 18 11:03:53.807114 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 18 11:03:53.808872 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 18 11:03:53.810656 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 18 11:03:53.812519 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 18 11:03:53.814495 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 18 11:03:53.816181 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 18 11:03:53.818007 systemd[1]: Stopped target swap.target - Swaps. Dec 18 11:03:53.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.819414 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 18 11:03:53.819526 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 18 11:03:53.827437 kernel: audit: type=1131 audit(1766055833.820:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.821195 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 18 11:03:53.824850 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 18 11:03:53.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.833672 kernel: audit: type=1131 audit(1766055833.830:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.826604 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 18 11:03:53.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.826947 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 18 11:03:53.828633 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 18 11:03:53.828755 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 18 11:03:53.830513 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 18 11:03:53.830629 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 18 11:03:53.834868 systemd[1]: Stopped target paths.target - Path Units. Dec 18 11:03:53.836303 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 18 11:03:53.836606 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 18 11:03:53.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.838689 systemd[1]: Stopped target slices.target - Slice Units. Dec 18 11:03:53.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.840296 systemd[1]: Stopped target sockets.target - Socket Units. Dec 18 11:03:53.841860 systemd[1]: iscsid.socket: Deactivated successfully. Dec 18 11:03:53.841964 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 18 11:03:53.843583 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 18 11:03:53.843678 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 18 11:03:53.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.845393 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 18 11:03:53.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.845468 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 18 11:03:53.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.847060 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 18 11:03:53.847174 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 18 11:03:53.848971 systemd[1]: ignition-files.service: Deactivated successfully. Dec 18 11:03:53.849068 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 18 11:03:53.851794 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 18 11:03:53.853948 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 18 11:03:53.855496 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 18 11:03:53.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 18 11:03:53.855608 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 18 11:03:53.857388 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 18 11:03:53.857487 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 18 11:03:53.859106 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 18 11:03:53.859201 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 18 11:03:53.864160 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 18 11:03:53.866650 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 18 11:03:53.874830 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 18 11:03:53.878509 ignition[1123]: INFO : Ignition 2.24.0 Dec 18 11:03:53.878509 ignition[1123]: INFO : Stage: umount Dec 18 11:03:53.878509 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 18 11:03:53.878509 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 18 11:03:53.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.878582 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 18 11:03:53.884971 ignition[1123]: INFO : umount: umount passed Dec 18 11:03:53.884971 ignition[1123]: INFO : Ignition finished successfully Dec 18 11:03:53.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.879686 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 18 11:03:53.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.881004 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 18 11:03:53.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.881109 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 18 11:03:53.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.882958 systemd[1]: Stopped target network.target - Network. Dec 18 11:03:53.884248 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 18 11:03:53.884298 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 18 11:03:53.885829 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 18 11:03:53.885861 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Dec 18 11:03:53.887256 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 18 11:03:53.887292 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 18 11:03:53.888750 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 18 11:03:53.888783 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 18 11:03:53.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.890329 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 18 11:03:53.890365 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 18 11:03:53.892070 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 18 11:03:53.893591 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 18 11:03:53.900781 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 18 11:03:53.900905 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 18 11:03:53.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.907000 audit: BPF prog-id=5 op=UNLOAD Dec 18 11:03:53.907323 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 18 11:03:53.907431 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 18 11:03:53.911034 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 18 11:03:53.912419 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 18 11:03:53.912461 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 18 11:03:53.914979 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 18 11:03:53.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.915826 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 18 11:03:53.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.915873 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 18 11:03:53.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.917839 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 18 11:03:53.924000 audit: BPF prog-id=8 op=UNLOAD Dec 18 11:03:53.917873 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 18 11:03:53.919545 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 18 11:03:53.919580 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 18 11:03:53.921460 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 18 11:03:53.940265 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 18 11:03:53.946797 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 18 11:03:53.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.948455 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 18 11:03:53.948492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 18 11:03:53.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.950233 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 18 11:03:53.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.950272 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 18 11:03:53.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.952096 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 18 11:03:53.952135 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 18 11:03:53.953931 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 18 11:03:53.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.953970 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 18 11:03:53.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.957404 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 18 11:03:53.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.958490 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 18 11:03:53.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.958544 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line. Dec 18 11:03:53.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.960650 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 18 11:03:53.960690 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 18 11:03:53.962664 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 18 11:03:53.962701 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
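Each unit torn down above leaves a pair of audit records: SERVICE_START when it finished starting and SERVICE_STOP when it is deactivated, both carrying a unit= field inside the msg='...' payload. A small sketch that groups those records per unit, assuming only the key=value layout visible in the lines above (sample lines abridged):

import re
from collections import defaultdict

# Matches the visible shape of the kauditd lines above, e.g.
# "audit[1]: SERVICE_STOP pid=1 uid=0 ... msg='unit=systemd-udevd ... res=success'"
AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w.@-]+)")

def unit_lifecycles(lines):
    """Group SERVICE_START / SERVICE_STOP audit events per systemd unit."""
    events = defaultdict(list)
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            kind, unit = m.groups()
            events[unit].append(kind)
    return dict(events)

sample = [  # abridged copies of two records from the log
    "Dec 18 11:03:53.947000 audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=systemd-udevd res=success'",
    "Dec 18 11:03:53.979000 audit[1]: SERVICE_START pid=1 uid=0 msg='unit=initrd-udevadm-cleanup-db res=success'",
]
print(unit_lifecycles(sample))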
Dec 18 11:03:53.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.964840 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 18 11:03:53.964877 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 18 11:03:53.966770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 18 11:03:53.966807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 18 11:03:53.969119 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 18 11:03:53.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:53.972746 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 18 11:03:53.978550 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 18 11:03:53.978717 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 18 11:03:53.980666 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 18 11:03:53.982950 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 18 11:03:54.004628 systemd[1]: Switching root. Dec 18 11:03:54.037773 systemd-journald[348]: Journal stopped Dec 18 11:03:55.615426 systemd-journald[348]: Received SIGTERM from PID 1 (systemd). Dec 18 11:03:55.615485 kernel: SELinux: policy capability network_peer_controls=1 Dec 18 11:03:55.615503 kernel: SELinux: policy capability open_perms=1 Dec 18 11:03:55.615515 kernel: SELinux: policy capability extended_socket_class=1 Dec 18 11:03:55.615527 kernel: SELinux: policy capability always_check_network=0 Dec 18 11:03:55.615540 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 18 11:03:55.615554 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 18 11:03:55.615564 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 18 11:03:55.615575 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 18 11:03:55.615585 kernel: SELinux: policy capability userspace_initial_context=0 Dec 18 11:03:55.615596 systemd[1]: Successfully loaded SELinux policy in 59.697ms. Dec 18 11:03:55.615609 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.804ms. Dec 18 11:03:55.615699 systemd[1]: systemd 258.2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 18 11:03:55.615713 systemd[1]: Detected virtualization kvm. Dec 18 11:03:55.615727 systemd[1]: Detected architecture arm64. Dec 18 11:03:55.615738 systemd[1]: Detected first boot. Dec 18 11:03:55.615750 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 18 11:03:55.615760 zram_generator::config[1172]: No configuration found. Dec 18 11:03:55.615773 kernel: NET: Registered PF_VSOCK protocol family Dec 18 11:03:55.615792 systemd[1]: Applying preset policy. 
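The systemd banner above lists compile-time features as a single run of +/- tokens (+PAM +AUDIT +SELINUX -APPARMOR ...). A small helper that splits such a banner into enabled and disabled sets; the banner text is copied from the log, the function itself is only an illustrative convenience:

def split_features(banner: str):
    """Split a systemd feature banner into (enabled, disabled) feature names."""
    enabled, disabled = set(), set()
    for token in banner.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

# A slice of the banner printed by systemd 258.2 above.
banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL"
enabled, disabled = split_features(banner)
print(sorted(enabled))
print(sorted(disabled))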
Dec 18 11:03:55.615807 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'. Dec 18 11:03:55.615818 systemd[1]: Populated /etc with preset unit settings. Dec 18 11:03:55.615830 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Dec 18 11:03:55.615842 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 18 11:03:55.615853 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 18 11:03:55.615865 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 18 11:03:55.615876 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 18 11:03:55.615887 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 18 11:03:55.615900 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 18 11:03:55.615911 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 18 11:03:55.615922 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 18 11:03:55.615934 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 18 11:03:55.615945 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 18 11:03:55.615956 systemd[1]: Created slice user.slice - User and Session Slice. Dec 18 11:03:55.615967 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 18 11:03:55.615978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 18 11:03:55.615990 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 18 11:03:55.616002 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 18 11:03:55.616017 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 18 11:03:55.616029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 18 11:03:55.616040 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 18 11:03:55.616051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 18 11:03:55.616063 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 18 11:03:55.616076 systemd[1]: Reached target imports.target - Image Downloads. Dec 18 11:03:55.616087 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 18 11:03:55.616100 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 18 11:03:55.616113 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 18 11:03:55.616124 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 18 11:03:55.616135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 18 11:03:55.616146 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 18 11:03:55.616158 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes. Dec 18 11:03:55.616169 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 18 11:03:55.616181 systemd[1]: Reached target slices.target - Slice Units. 
Dec 18 11:03:55.616192 systemd[1]: Reached target swap.target - Swaps. Dec 18 11:03:55.616203 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 18 11:03:55.616221 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Dec 18 11:03:55.616235 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 18 11:03:55.616246 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 18 11:03:55.616257 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management. Dec 18 11:03:55.616271 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 18 11:03:55.616282 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 18 11:03:55.616294 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket. Dec 18 11:03:55.616305 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 18 11:03:55.616317 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 18 11:03:55.616328 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 18 11:03:55.616339 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket. Dec 18 11:03:55.616352 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket. Dec 18 11:03:55.616364 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 18 11:03:55.616376 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket. Dec 18 11:03:55.616388 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 18 11:03:55.616399 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 18 11:03:55.616410 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 18 11:03:55.616423 systemd[1]: Mounting media.mount - External Media Directory... Dec 18 11:03:55.616434 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 18 11:03:55.616445 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 18 11:03:55.616457 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing. Dec 18 11:03:55.616468 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 18 11:03:55.616479 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 18 11:03:55.616490 systemd[1]: Reached target machines.target - Virtual Machines and Containers. Dec 18 11:03:55.616503 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 18 11:03:55.616514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 18 11:03:55.616525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 18 11:03:55.616536 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 18 11:03:55.616551 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod). Dec 18 11:03:55.616562 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 18 11:03:55.616574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 18 11:03:55.616585 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 18 11:03:55.616596 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop). Dec 18 11:03:55.616608 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 18 11:03:55.616630 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 18 11:03:55.616643 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 18 11:03:55.616666 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 18 11:03:55.616679 systemd[1]: Stopped systemd-fsck-usr.service. Dec 18 11:03:55.616691 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 18 11:03:55.616702 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 18 11:03:55.616713 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 18 11:03:55.616726 kernel: fuse: init (API version 7.41) Dec 18 11:03:55.616737 kernel: ACPI: bus type drm_connector registered Dec 18 11:03:55.616748 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Dec 18 11:03:55.616759 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 18 11:03:55.616771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 18 11:03:55.616782 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 18 11:03:55.616793 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 18 11:03:55.616807 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 18 11:03:55.616818 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 18 11:03:55.616829 systemd[1]: Mounted media.mount - External Media Directory. Dec 18 11:03:55.616840 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 18 11:03:55.616879 systemd-journald[1243]: Collecting audit messages is enabled. Dec 18 11:03:55.616901 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 18 11:03:55.616914 systemd-journald[1243]: Journal started Dec 18 11:03:55.616935 systemd-journald[1243]: Runtime Journal (/run/log/journal/0fb3edb6c86740df90d722ad6af62f93) is 6M, max 48.5M, 42.4M free. Dec 18 11:03:55.476000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 18 11:03:55.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 18 11:03:55.571000 audit: BPF prog-id=12 op=UNLOAD Dec 18 11:03:55.571000 audit: BPF prog-id=11 op=UNLOAD Dec 18 11:03:55.572000 audit: BPF prog-id=13 op=LOAD Dec 18 11:03:55.572000 audit: BPF prog-id=14 op=LOAD Dec 18 11:03:55.572000 audit: BPF prog-id=15 op=LOAD Dec 18 11:03:55.613000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 18 11:03:55.613000 audit[1243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff6d0c490 a2=4000 a3=0 items=0 ppid=1 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:03:55.613000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 18 11:03:55.371147 systemd[1]: Queued start job for default target multi-user.target. Dec 18 11:03:55.393811 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 18 11:03:55.394187 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 18 11:03:55.619267 systemd[1]: Started systemd-journald.service - Journal Service. Dec 18 11:03:55.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.620117 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 18 11:03:55.623660 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 18 11:03:55.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.625277 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 18 11:03:55.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.626982 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 18 11:03:55.627154 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 18 11:03:55.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.628717 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 18 11:03:55.628869 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 18 11:03:55.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 18 11:03:55.630104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 18 11:03:55.630273 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 18 11:03:55.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.631836 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 18 11:03:55.632027 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 18 11:03:55.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.633406 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 18 11:03:55.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.634909 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Dec 18 11:03:55.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.637514 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 18 11:03:55.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.640811 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 18 11:03:55.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.653488 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 18 11:03:55.655179 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 18 11:03:55.657559 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 18 11:03:55.659711 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 18 11:03:55.660758 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 18 11:03:55.660788 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 18 11:03:55.662599 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 18 11:03:55.664025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 18 11:03:55.674164 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/... Dec 18 11:03:55.676670 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 18 11:03:55.679782 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 18 11:03:55.680964 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 18 11:03:55.682009 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 18 11:03:55.685403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 18 11:03:55.699706 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 18 11:03:55.702422 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials... Dec 18 11:03:55.705826 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 18 11:03:55.707125 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 18 11:03:55.712868 systemd-journald[1243]: Time spent on flushing to /var/log/journal/0fb3edb6c86740df90d722ad6af62f93 is 18.486ms for 1068 entries. Dec 18 11:03:55.712868 systemd-journald[1243]: System Journal (/var/log/journal/0fb3edb6c86740df90d722ad6af62f93) is 8M, max 163.5M, 155.5M free. Dec 18 11:03:55.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.716116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 18 11:03:55.743391 systemd-journald[1243]: Received client request to flush runtime journal. Dec 18 11:03:55.717600 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 18 11:03:55.743565 kernel: loop4: detected capacity change from 0 to 38472 Dec 18 11:03:55.719159 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 18 11:03:55.743740 kernel: loop4: p1 p2 p3 Dec 18 11:03:55.724232 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 18 11:03:55.743898 kernel: erofs: (device loop4p1): mounted with root inode @ nid 40. 
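systemd-journald reports both its journal budgets above (runtime journal 6M of 48.5M max, system journal 8M of 163.5M max) and how long the flush to /var/log/journal took: 18.486ms for 1068 entries. A back-of-the-envelope rate check using exactly those figures:

def flush_rate(entries: int, millis: float) -> float:
    """Journal entries flushed per millisecond."""
    return entries / millis

# Figures reported by systemd-journald[1243] above.
print(f"{flush_rate(1068, 18.486):.1f} entries/ms")  # ~57.8 entries per millisecond
print(f"{1068 / (18.486 / 1000):.0f} entries/s")     # ~57773 entries per second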
Dec 18 11:03:55.727777 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 18 11:03:55.732590 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Dec 18 11:03:55.732601 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Dec 18 11:03:55.734681 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials. Dec 18 11:03:55.745728 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 18 11:03:55.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.747529 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 18 11:03:55.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.753334 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 18 11:03:55.759348 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 18 11:03:55.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.761293 kernel: loop4: detected capacity change from 0 to 38472 Dec 18 11:03:55.762642 kernel: loop4: p1 p2 p3 Dec 18 11:03:55.777656 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:55.777734 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 18 11:03:55.777755 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Dec 18 11:03:55.777772 kernel: device-mapper: ioctl: error adding target to table Dec 18 11:03:55.778320 (sd-merge)[1309]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Dec 18 11:03:55.783669 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:55.791283 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 18 11:03:55.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.792000 audit: BPF prog-id=16 op=LOAD Dec 18 11:03:55.792000 audit: BPF prog-id=17 op=LOAD Dec 18 11:03:55.792000 audit: BPF prog-id=18 op=LOAD Dec 18 11:03:55.796234 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 18 11:03:55.797000 audit: BPF prog-id=19 op=LOAD Dec 18 11:03:55.798794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 18 11:03:55.799000 audit: BPF prog-id=20 op=LOAD Dec 18 11:03:55.802769 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 18 11:03:55.804710 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 18 11:03:55.807412 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun... 
Dec 18 11:03:55.808000 audit: BPF prog-id=21 op=LOAD Dec 18 11:03:55.814000 audit: BPF prog-id=22 op=LOAD Dec 18 11:03:55.814000 audit: BPF prog-id=23 op=LOAD Dec 18 11:03:55.815581 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 18 11:03:55.822700 kernel: tun: Universal TUN/TAP device driver, 1.6 Dec 18 11:03:55.823098 systemd[1]: modprobe@tun.service: Deactivated successfully. Dec 18 11:03:55.823416 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun. Dec 18 11:03:55.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.825293 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Dec 18 11:03:55.824000 audit: BPF prog-id=24 op=LOAD Dec 18 11:03:55.824000 audit: BPF prog-id=25 op=LOAD Dec 18 11:03:55.824000 audit: BPF prog-id=26 op=LOAD Dec 18 11:03:55.825308 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Dec 18 11:03:55.826787 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 18 11:03:55.828582 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 18 11:03:55.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.848486 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 18 11:03:55.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.864372 systemd-nsresourced[1326]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 18 11:03:55.865410 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 18 11:03:55.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.926609 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 18 11:03:55.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.929646 systemd[1]: Reached target time-set.target - System Time Set. Dec 18 11:03:55.933531 systemd-oomd[1318]: No swap; memory pressure usage will be degraded Dec 18 11:03:55.934457 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 18 11:03:55.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.937364 systemd-resolved[1319]: Positive Trust Anchors: Dec 18 11:03:55.937448 systemd-resolved[1319]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 18 11:03:55.937452 systemd-resolved[1319]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 18 11:03:55.937484 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 18 11:03:55.941294 systemd-resolved[1319]: Defaulting to hostname 'linux'. Dec 18 11:03:55.942459 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 18 11:03:55.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:55.943654 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 18 11:03:56.193694 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 18 11:03:56.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.195000 audit: BPF prog-id=27 op=LOAD Dec 18 11:03:56.195000 audit: BPF prog-id=28 op=LOAD Dec 18 11:03:56.196662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 18 11:03:56.229454 systemd-udevd[1347]: Using default interface naming scheme 'v258'. Dec 18 11:03:56.263435 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 18 11:03:56.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.265000 audit: BPF prog-id=29 op=LOAD Dec 18 11:03:56.267100 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 18 11:03:56.267000 audit: BPF prog-id=7 op=UNLOAD Dec 18 11:03:56.267000 audit: BPF prog-id=6 op=UNLOAD Dec 18 11:03:56.319754 systemd-networkd[1350]: lo: Link UP Dec 18 11:03:56.319762 systemd-networkd[1350]: lo: Gained carrier Dec 18 11:03:56.320486 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 18 11:03:56.323023 systemd[1]: Reached target network.target - Network. Dec 18 11:03:56.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.326390 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 18 11:03:56.329064 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 18 11:03:56.362669 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Dec 18 11:03:56.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.365422 systemd-networkd[1350]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 18 11:03:56.365436 systemd-networkd[1350]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 18 11:03:56.365937 systemd-networkd[1350]: eth0: Link UP Dec 18 11:03:56.366060 systemd-networkd[1350]: eth0: Gained carrier Dec 18 11:03:56.366078 systemd-networkd[1350]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 18 11:03:56.367684 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 18 11:03:56.379758 systemd-networkd[1350]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 18 11:03:56.380905 systemd-timesyncd[1320]: Network configuration changed, trying to establish connection. Dec 18 11:03:55.925489 systemd-resolved[1319]: Clock change detected. Flushing caches. Dec 18 11:03:55.925503 systemd-timesyncd[1320]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 18 11:03:55.930444 systemd-journald[1243]: Time jumped backwards, rotating. Dec 18 11:03:55.925551 systemd-timesyncd[1320]: Initial clock synchronization to Thu 2025-12-18 11:03:55.925436 UTC. Dec 18 11:03:55.941644 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 18 11:03:55.948153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 18 11:03:55.953166 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 18 11:03:55.980473 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 18 11:03:55.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.048943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 18 11:03:56.073803 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Dec 18 11:03:56.074273 (sd-merge)[1309]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Dec 18 11:03:56.076505 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/. Dec 18 11:03:56.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.083737 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Dec 18 11:03:56.086999 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 18 11:03:56.102744 kernel: loop4: detected capacity change from 0 to 161080 Dec 18 11:03:56.103746 kernel: loop4: p1 p2 p3 Dec 18 11:03:56.104627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
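The backwards jump that systemd-journald reports above ("Time jumped backwards, rotating") comes from systemd-timesyncd stepping the clock at its first NTP synchronization. A rough size for the step can be read straight off the adjacent journal timestamps; a small Python sketch, with the two timestamps copied from the entries above and everything else purely illustrative:

    from datetime import datetime

    # Last timestamp logged before the NTP step and the first one logged after it,
    # taken from the systemd-timesyncd / systemd-resolved entries above.
    before_step = datetime.strptime("11:03:56.380905", "%H:%M:%S.%f")
    after_step  = datetime.strptime("11:03:55.925489", "%H:%M:%S.%f")
    print((before_step - after_step).total_seconds())  # ~0.455 s stepped backwards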
Dec 18 11:03:56.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.111758 kernel: erofs: (device loop4p1): mounted with root inode @ nid 39. Dec 18 11:03:56.133830 kernel: loop4: detected capacity change from 0 to 353272 Dec 18 11:03:56.135747 kernel: loop4: p1 p2 p3 Dec 18 11:03:56.143742 kernel: erofs: (device loop4p1): mounted with root inode @ nid 39. Dec 18 11:03:56.164754 kernel: loop4: detected capacity change from 0 to 207008 Dec 18 11:03:56.189068 kernel: loop4: detected capacity change from 0 to 161080 Dec 18 11:03:56.189116 kernel: loop4: p1 p2 p3 Dec 18 11:03:56.204780 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:56.204863 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 18 11:03:56.204881 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Dec 18 11:03:56.205808 kernel: device-mapper: ioctl: error adding target to table Dec 18 11:03:56.208327 (sd-merge)[1418]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Dec 18 11:03:56.214746 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:56.234752 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Dec 18 11:03:56.236757 kernel: loop5: detected capacity change from 0 to 353272 Dec 18 11:03:56.236817 kernel: loop5: p1 p2 p3 Dec 18 11:03:56.238746 kernel: loop5: p1 p2 p3 Dec 18 11:03:56.248147 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:56.248206 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 18 11:03:56.248226 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL) Dec 18 11:03:56.249274 kernel: device-mapper: ioctl: error adding target to table Dec 18 11:03:56.249930 (sd-merge)[1418]: device-mapper: reload ioctl on loop5p1-verity (253:5) failed: Invalid argument Dec 18 11:03:56.257748 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 18 11:03:56.275743 kernel: erofs: (device dm-5): mounted with root inode @ nid 39. Dec 18 11:03:56.277738 kernel: loop6: detected capacity change from 0 to 207008 Dec 18 11:03:56.283239 (sd-merge)[1418]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Dec 18 11:03:56.286148 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 18 11:03:56.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.289552 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 18 11:03:56.302753 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Dec 18 11:03:56.302800 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Dec 18 11:03:56.310891 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 18 11:03:56.310927 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Dec 18 11:03:56.311105 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 18 11:03:56.312070 systemd-tmpfiles[1435]: ACLs are not supported, ignoring. Dec 18 11:03:56.312124 systemd-tmpfiles[1435]: ACLs are not supported, ignoring. Dec 18 11:03:56.314970 systemd-tmpfiles[1435]: Detected autofs mount point /boot during canonicalization of boot. Dec 18 11:03:56.314988 systemd-tmpfiles[1435]: Skipping /boot Dec 18 11:03:56.320513 systemd-tmpfiles[1435]: Detected autofs mount point /boot during canonicalization of boot. Dec 18 11:03:56.320530 systemd-tmpfiles[1435]: Skipping /boot Dec 18 11:03:56.330376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 18 11:03:56.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.333132 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 18 11:03:56.335094 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 18 11:03:56.337231 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 18 11:03:56.345501 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 18 11:03:56.348105 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 18 11:03:56.358000 audit[1445]: AUDIT1127 pid=1445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.365832 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 18 11:03:56.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.385329 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 18 11:03:56.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:03:56.390000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 18 11:03:56.390000 audit[1467]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeaf4c1a0 a2=420 a3=0 items=0 ppid=1441 pid=1467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:03:56.390000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 18 11:03:56.390939 augenrules[1467]: No rules Dec 18 11:03:56.391968 systemd[1]: audit-rules.service: Deactivated successfully. Dec 18 11:03:56.392807 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 18 11:03:56.397540 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
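The audit PROCTITLE record above encodes the rule-loading command's argv as hex with NUL-separated arguments. Decoding it (hex value copied verbatim from the record; the snippet itself is only an illustration) shows the auditctl invocation behind the "No rules" result:

    # Hex value taken from the PROCTITLE record above; arguments are NUL-separated.
    hexstr = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = [part.decode() for part in bytes.fromhex(hexstr).split(b"\x00")]
    print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']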
Dec 18 11:03:56.399450 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 18 11:03:56.605375 ldconfig[1443]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 18 11:03:56.610822 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 18 11:03:56.613146 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 18 11:03:56.636671 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 18 11:03:56.637960 systemd[1]: Reached target sysinit.target - System Initialization. Dec 18 11:03:56.639965 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 18 11:03:56.641092 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 18 11:03:56.642398 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 18 11:03:56.643484 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 18 11:03:56.644758 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 18 11:03:56.645934 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 18 11:03:56.646917 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 18 11:03:56.648011 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 18 11:03:56.648044 systemd[1]: Reached target paths.target - Path Units. Dec 18 11:03:56.648874 systemd[1]: Reached target timers.target - Timer Units. Dec 18 11:03:56.650459 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 18 11:03:56.652919 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 18 11:03:56.655480 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 18 11:03:56.664640 systemd[1]: Starting sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK)... Dec 18 11:03:56.667674 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 18 11:03:56.668950 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 18 11:03:56.670287 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket. Dec 18 11:03:56.671802 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket. Dec 18 11:03:56.673544 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 18 11:03:56.674653 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 18 11:03:56.676355 systemd[1]: Reached target sockets.target - Socket Units. Dec 18 11:03:56.677281 systemd[1]: Reached target basic.target - Basic System. Dec 18 11:03:56.678191 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 18 11:03:56.679173 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 18 11:03:56.679203 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
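The ldconfig complaint a few entries up is its ELF magic check tripping over a plain-text file: /usr/lib/ld.so.conf starts with text rather than the 4-byte 0x7f 'E' 'L' 'F' signature, so the warning is typically harmless. A small sketch of the same check, using the path from the log, for illustration only:

    # Reproduce the check ldconfig is complaining about: an ELF file must start
    # with the magic bytes 0x7f 'E' 'L' 'F'; ld.so.conf is plain text, so it won't.
    with open("/usr/lib/ld.so.conf", "rb") as f:
        magic = f.read(4)
    print(magic, "looks like ELF" if magic == b"\x7fELF" else "is not an ELF file")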
Dec 18 11:03:56.680173 systemd[1]: Starting containerd.service - containerd container runtime... Dec 18 11:03:56.682062 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 18 11:03:56.683803 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 18 11:03:56.687668 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 18 11:03:56.689522 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 18 11:03:56.690481 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 18 11:03:56.691443 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 18 11:03:56.693826 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 18 11:03:56.695746 jq[1485]: false Dec 18 11:03:56.697862 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 18 11:03:56.700311 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 18 11:03:56.703754 extend-filesystems[1486]: Found /dev/vda6 Dec 18 11:03:56.704530 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 18 11:03:56.705628 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 18 11:03:56.707928 systemd[1]: Starting update-engine.service - Update Engine... Dec 18 11:03:56.710989 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 18 11:03:56.714457 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 18 11:03:56.716398 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 18 11:03:56.716971 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 18 11:03:56.717229 systemd[1]: motdgen.service: Deactivated successfully. Dec 18 11:03:56.717446 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 18 11:03:56.720764 extend-filesystems[1486]: Found /dev/vda9 Dec 18 11:03:56.721740 jq[1502]: true Dec 18 11:03:56.725241 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 18 11:03:56.726784 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 18 11:03:56.731924 extend-filesystems[1486]: Checking size of /dev/vda9 Dec 18 11:03:56.737604 update_engine[1501]: I20251218 11:03:56.737136 1501 main.cc:92] Flatcar Update Engine starting Dec 18 11:03:56.740266 extend-filesystems[1486]: Resized partition /dev/vda9 Dec 18 11:03:56.746084 extend-filesystems[1531]: resize2fs 1.47.3 (8-Jul-2025) Dec 18 11:03:56.749599 jq[1517]: true Dec 18 11:03:56.749686 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 18 11:03:56.767071 dbus-daemon[1483]: [system] SELinux support is enabled Dec 18 11:03:56.767710 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 18 11:03:56.775791 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 18 11:03:56.779308 tar[1509]: linux-arm64/LICENSE Dec 18 11:03:56.779349 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 18 11:03:56.779378 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 18 11:03:56.793882 tar[1509]: linux-arm64/helm Dec 18 11:03:56.793909 update_engine[1501]: I20251218 11:03:56.787967 1501 update_check_scheduler.cc:74] Next update check in 9m9s Dec 18 11:03:56.780887 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 18 11:03:56.780906 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 18 11:03:56.787941 systemd[1]: Started update-engine.service - Update Engine. Dec 18 11:03:56.793417 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 18 11:03:56.796953 extend-filesystems[1531]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 18 11:03:56.796953 extend-filesystems[1531]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 18 11:03:56.796953 extend-filesystems[1531]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 18 11:03:56.802441 extend-filesystems[1486]: Resized filesystem in /dev/vda9 Dec 18 11:03:56.809835 bash[1549]: Updated "/home/core/.ssh/authorized_keys" Dec 18 11:03:56.813766 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 18 11:03:56.816279 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 18 11:03:56.816533 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 18 11:03:56.819768 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 18 11:03:56.848176 systemd-logind[1499]: Watching system buttons on /dev/input/event0 (Power Button) Dec 18 11:03:56.848870 systemd-logind[1499]: New seat seat0. Dec 18 11:03:56.849570 systemd[1]: Started systemd-logind.service - User Login Management. 
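The resize2fs figures above translate directly into sizes, since the journal states the filesystem uses 4k blocks: /dev/vda9 grew from roughly 1.7 GiB to roughly 6.8 GiB during the online resize. A quick check of that arithmetic:

    # Block counts from the EXT4-fs / resize2fs messages above; 4 KiB blocks as logged.
    BLOCK_SIZE = 4096
    for blocks in (456704, 1784827):
        print(f"{blocks} blocks = {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
    # 456704 blocks  -> 1.74 GiB (before the resize)
    # 1784827 blocks -> 6.81 GiB (after the online resize of /dev/vda9)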
Dec 18 11:03:56.916845 locksmithd[1551]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 18 11:03:56.936166 containerd[1520]: time="2025-12-18T11:03:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 18 11:03:56.938734 containerd[1520]: time="2025-12-18T11:03:56.937910703Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 18 11:03:56.956138 containerd[1520]: time="2025-12-18T11:03:56.956092543Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.88µs" Dec 18 11:03:56.956739 containerd[1520]: time="2025-12-18T11:03:56.956473343Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 18 11:03:56.956739 containerd[1520]: time="2025-12-18T11:03:56.956521263Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 18 11:03:56.956739 containerd[1520]: time="2025-12-18T11:03:56.956533983Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 18 11:03:56.956739 containerd[1520]: time="2025-12-18T11:03:56.956668743Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 18 11:03:56.956739 containerd[1520]: time="2025-12-18T11:03:56.956685943Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 18 11:03:56.956868 containerd[1520]: time="2025-12-18T11:03:56.956773183Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 18 11:03:56.956868 containerd[1520]: time="2025-12-18T11:03:56.956787583Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957018623Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957040143Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957052023Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957060263Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957224103Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957285063Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957448183Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957474823Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957485143Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.957836903Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.958345663Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 18 11:03:56.958646 containerd[1520]: time="2025-12-18T11:03:56.958443303Z" level=info msg="metadata content store policy set" policy=shared Dec 18 11:03:56.962389 containerd[1520]: time="2025-12-18T11:03:56.962278463Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 18 11:03:56.962389 containerd[1520]: time="2025-12-18T11:03:56.962336663Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 18 11:03:56.962580 containerd[1520]: time="2025-12-18T11:03:56.962557863Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 18 11:03:56.962634 containerd[1520]: time="2025-12-18T11:03:56.962621623Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 18 11:03:56.962688 containerd[1520]: time="2025-12-18T11:03:56.962676343Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 18 11:03:56.962775 containerd[1520]: time="2025-12-18T11:03:56.962759903Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 18 11:03:56.962835 containerd[1520]: time="2025-12-18T11:03:56.962816543Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 18 11:03:56.962900 containerd[1520]: time="2025-12-18T11:03:56.962878743Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 18 11:03:56.962956 containerd[1520]: time="2025-12-18T11:03:56.962943023Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 18 11:03:56.963009 containerd[1520]: time="2025-12-18T11:03:56.962998223Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 18 11:03:56.963058 containerd[1520]: time="2025-12-18T11:03:56.963046783Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 18 11:03:56.963121 containerd[1520]: time="2025-12-18T11:03:56.963108103Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 18 11:03:56.963176 containerd[1520]: time="2025-12-18T11:03:56.963162623Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 18 11:03:56.963229 containerd[1520]: 
time="2025-12-18T11:03:56.963217063Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 18 11:03:56.963410 containerd[1520]: time="2025-12-18T11:03:56.963388703Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 18 11:03:56.963483 containerd[1520]: time="2025-12-18T11:03:56.963469663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 18 11:03:56.963535 containerd[1520]: time="2025-12-18T11:03:56.963523183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 18 11:03:56.963584 containerd[1520]: time="2025-12-18T11:03:56.963571743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 18 11:03:56.963655 containerd[1520]: time="2025-12-18T11:03:56.963641503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 18 11:03:56.963706 containerd[1520]: time="2025-12-18T11:03:56.963693743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 18 11:03:56.963783 containerd[1520]: time="2025-12-18T11:03:56.963769903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 18 11:03:56.963850 containerd[1520]: time="2025-12-18T11:03:56.963836983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 18 11:03:56.963923 containerd[1520]: time="2025-12-18T11:03:56.963908703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 18 11:03:56.963974 containerd[1520]: time="2025-12-18T11:03:56.963962863Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 18 11:03:56.964120 containerd[1520]: time="2025-12-18T11:03:56.964010423Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 18 11:03:56.964120 containerd[1520]: time="2025-12-18T11:03:56.964041303Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 18 11:03:56.964420 containerd[1520]: time="2025-12-18T11:03:56.964283743Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 18 11:03:56.964495 containerd[1520]: time="2025-12-18T11:03:56.964480183Z" level=info msg="Start snapshots syncer" Dec 18 11:03:56.965245 containerd[1520]: time="2025-12-18T11:03:56.964870223Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 18 11:03:56.965245 containerd[1520]: time="2025-12-18T11:03:56.965126423Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 18 11:03:56.965395 containerd[1520]: time="2025-12-18T11:03:56.965171863Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 18 11:03:56.965839 containerd[1520]: time="2025-12-18T11:03:56.965813303Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 18 11:03:56.966079 containerd[1520]: time="2025-12-18T11:03:56.966055903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 18 11:03:56.966171 containerd[1520]: time="2025-12-18T11:03:56.966155543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 18 11:03:56.966242 containerd[1520]: time="2025-12-18T11:03:56.966228423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 18 11:03:56.966293 containerd[1520]: time="2025-12-18T11:03:56.966281223Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 18 11:03:56.966362 containerd[1520]: time="2025-12-18T11:03:56.966348463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 18 11:03:56.966424 containerd[1520]: time="2025-12-18T11:03:56.966410463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 18 11:03:56.966481 containerd[1520]: time="2025-12-18T11:03:56.966469703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 18 11:03:56.966532 containerd[1520]: time="2025-12-18T11:03:56.966519663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 18 
11:03:56.966646 containerd[1520]: time="2025-12-18T11:03:56.966580943Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 18 11:03:56.966878 containerd[1520]: time="2025-12-18T11:03:56.966857023Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 18 11:03:56.967278 containerd[1520]: time="2025-12-18T11:03:56.967256223Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 18 11:03:56.967413 containerd[1520]: time="2025-12-18T11:03:56.967394583Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 18 11:03:56.967801 containerd[1520]: time="2025-12-18T11:03:56.967514903Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 18 11:03:56.967801 containerd[1520]: time="2025-12-18T11:03:56.967530903Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 18 11:03:56.967801 containerd[1520]: time="2025-12-18T11:03:56.967553103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 18 11:03:56.967801 containerd[1520]: time="2025-12-18T11:03:56.967565703Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 18 11:03:56.967801 containerd[1520]: time="2025-12-18T11:03:56.967638183Z" level=info msg="runtime interface created" Dec 18 11:03:56.967801 containerd[1520]: time="2025-12-18T11:03:56.967644063Z" level=info msg="created NRI interface" Dec 18 11:03:56.967801 containerd[1520]: time="2025-12-18T11:03:56.967652143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 18 11:03:56.967801 containerd[1520]: time="2025-12-18T11:03:56.967662943Z" level=info msg="Connect containerd service" Dec 18 11:03:56.967801 containerd[1520]: time="2025-12-18T11:03:56.967687383Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 18 11:03:56.969151 containerd[1520]: time="2025-12-18T11:03:56.969096623Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 18 11:03:57.057435 containerd[1520]: time="2025-12-18T11:03:57.057377983Z" level=info msg="Start subscribing containerd event" Dec 18 11:03:57.057583 containerd[1520]: time="2025-12-18T11:03:57.057456263Z" level=info msg="Start recovering state" Dec 18 11:03:57.057793 containerd[1520]: time="2025-12-18T11:03:57.057772783Z" level=info msg="Start event monitor" Dec 18 11:03:57.057793 containerd[1520]: time="2025-12-18T11:03:57.057799663Z" level=info msg="Start cni network conf syncer for default" Dec 18 11:03:57.057793 containerd[1520]: time="2025-12-18T11:03:57.057813983Z" level=info msg="Start streaming server" Dec 18 11:03:57.058409 containerd[1520]: time="2025-12-18T11:03:57.058376143Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 18 11:03:57.058476 containerd[1520]: time="2025-12-18T11:03:57.058430583Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 18 11:03:57.058582 containerd[1520]: time="2025-12-18T11:03:57.058517983Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 18 11:03:57.058582 containerd[1520]: time="2025-12-18T11:03:57.058534383Z" level=info msg="runtime interface starting up..." Dec 18 11:03:57.058582 containerd[1520]: time="2025-12-18T11:03:57.058542623Z" level=info msg="starting plugins..." Dec 18 11:03:57.058582 containerd[1520]: time="2025-12-18T11:03:57.058560823Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 18 11:03:57.060769 containerd[1520]: time="2025-12-18T11:03:57.059816503Z" level=info msg="containerd successfully booted in 0.124131s" Dec 18 11:03:57.059842 systemd[1]: Started containerd.service - containerd container runtime. Dec 18 11:03:57.086824 systemd-networkd[1350]: eth0: Gained IPv6LL Dec 18 11:03:57.093128 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 18 11:03:57.095263 systemd[1]: Reached target network-online.target - Network is Online. Dec 18 11:03:57.097632 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 18 11:03:57.100917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 18 11:03:57.107515 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 18 11:03:57.122541 tar[1509]: linux-arm64/README.md Dec 18 11:03:57.132817 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 18 11:03:57.134514 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 18 11:03:57.135976 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 18 11:03:57.136185 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 18 11:03:57.139553 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 18 11:03:57.213348 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 18 11:03:57.232856 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 18 11:03:57.236215 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 18 11:03:57.251755 systemd[1]: issuegen.service: Deactivated successfully. Dec 18 11:03:57.252020 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 18 11:03:57.254605 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 18 11:03:57.272257 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 18 11:03:57.275160 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 18 11:03:57.277309 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 18 11:03:57.278768 systemd[1]: Reached target getty.target - Login Prompts. Dec 18 11:03:57.656121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 18 11:03:57.657776 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 18 11:03:57.660279 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 18 11:03:57.660653 systemd[1]: Startup finished in 1.461s (kernel) + 6.245s (initrd) + 3.906s (userspace) = 11.612s. 
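The "starting cri plugin" entry a few lines up embeds containerd's effective CRI configuration as an escaped JSON blob inside config="...". It can be pulled out and pretty-printed; a minimal sketch, using a shortened copy of that entry in which only a few of the logged keys are kept, so the output is a subset of the real configuration:

    import json, re

    # Shortened copy of the "starting cri plugin" journal entry above; quotes inside
    # config="..." are backslash-escaped in the journal text.
    entry = r'level=info msg="starting cri plugin" config="{\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384}"'
    raw = re.search(r'config="(.*)"$', entry).group(1)
    cfg = json.loads(raw.replace('\\"', '"'))
    print(json.dumps(cfg, indent=2))  # e.g. shows enableSelinux: true, as logged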
Dec 18 11:03:57.992139 kubelet[1625]: E1218 11:03:57.992032 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 18 11:03:57.994596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 18 11:03:57.994710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 18 11:03:58.006993 systemd[1]: kubelet.service: Consumed 743ms CPU time, 257.5M memory peak. Dec 18 11:03:59.367462 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 18 11:03:59.368579 systemd[1]: Started sshd@0-1-10.0.0.27:22-10.0.0.1:60008.service - OpenSSH per-connection server daemon (10.0.0.1:60008). Dec 18 11:03:59.466562 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 60008 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:03:59.468427 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:03:59.474992 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 18 11:03:59.475893 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 18 11:03:59.480040 systemd-logind[1499]: New session '1' of user 'core' with class 'user' and type 'tty'. Dec 18 11:03:59.500766 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 18 11:03:59.503060 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 18 11:03:59.524953 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:03:59.526912 systemd-logind[1499]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'. Dec 18 11:03:59.674868 systemd[1645]: Queued start job for default target default.target. Dec 18 11:03:59.697760 systemd[1645]: Created slice app.slice - User Application Slice. Dec 18 11:03:59.697797 systemd[1645]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 18 11:03:59.697811 systemd[1645]: Reached target machines.target - Virtual Machines and Containers. Dec 18 11:03:59.697864 systemd[1645]: Reached target paths.target - Paths. Dec 18 11:03:59.697892 systemd[1645]: Reached target timers.target - Timers. Dec 18 11:03:59.699158 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 18 11:03:59.700377 systemd[1645]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Dec 18 11:03:59.701242 systemd[1645]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 18 11:03:59.709594 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 18 11:03:59.709653 systemd[1645]: Reached target sockets.target - Sockets. Dec 18 11:03:59.711443 systemd[1645]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 18 11:03:59.711559 systemd[1645]: Reached target basic.target - Basic System. Dec 18 11:03:59.711617 systemd[1645]: Reached target default.target - Main User Target. Dec 18 11:03:59.711644 systemd[1645]: Startup finished in 179ms. Dec 18 11:03:59.711815 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 18 11:03:59.713449 systemd[1]: Started session-1.scope - Session 1 of User core. 
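The kubelet exit above is the unit failing its config-file precondition: /var/lib/kubelet/config.yaml does not exist yet at first boot, and on nodes like this it is normally written later by whatever provisions the cluster (kubeadm, for example), so the early restart loop is expected. A minimal sketch of that precondition; the path comes from the error text, while the kubeadm-style header in the comment is an assumption rather than something taken from this log:

    import os

    # Path taken from the kubelet error above. Its absence at first boot just means
    # node provisioning has not run yet; kubelet.service will keep restarting until
    # the file is written.
    cfg_path = "/var/lib/kubelet/config.yaml"
    if not os.path.exists(cfg_path):
        print(f"{cfg_path} is missing; kubelet cannot start yet")
        # A provisioned file would typically begin with (assumed, kubeadm-style):
        #   apiVersion: kubelet.config.k8s.io/v1beta1
        #   kind: KubeletConfiguration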
Dec 18 11:03:59.725582 systemd[1]: Started sshd@1-2-10.0.0.27:22-10.0.0.1:60012.service - OpenSSH per-connection server daemon (10.0.0.1:60012). Dec 18 11:03:59.784660 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 60012 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:03:59.786042 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:03:59.789575 systemd-logind[1499]: New session '3' of user 'core' with class 'user' and type 'tty'. Dec 18 11:03:59.803938 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 18 11:03:59.815033 sshd[1663]: Connection closed by 10.0.0.1 port 60012 Dec 18 11:03:59.815453 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Dec 18 11:03:59.834860 systemd[1]: sshd@1-2-10.0.0.27:22-10.0.0.1:60012.service: Deactivated successfully. Dec 18 11:03:59.836497 systemd[1]: session-3.scope: Deactivated successfully. Dec 18 11:03:59.837300 systemd-logind[1499]: Session 3 logged out. Waiting for processes to exit. Dec 18 11:03:59.839362 systemd[1]: Started sshd@2-4097-10.0.0.27:22-10.0.0.1:60028.service - OpenSSH per-connection server daemon (10.0.0.1:60028). Dec 18 11:03:59.840064 systemd-logind[1499]: Removed session 3. Dec 18 11:03:59.899743 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 60028 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:03:59.901081 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:03:59.904669 systemd-logind[1499]: New session '4' of user 'core' with class 'user' and type 'tty'. Dec 18 11:03:59.913926 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 18 11:03:59.920966 sshd[1674]: Connection closed by 10.0.0.1 port 60028 Dec 18 11:03:59.921319 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Dec 18 11:03:59.925019 systemd[1]: sshd@2-4097-10.0.0.27:22-10.0.0.1:60028.service: Deactivated successfully. Dec 18 11:03:59.927204 systemd[1]: session-4.scope: Deactivated successfully. Dec 18 11:03:59.928821 systemd-logind[1499]: Session 4 logged out. Waiting for processes to exit. Dec 18 11:03:59.930308 systemd-logind[1499]: Removed session 4. Dec 18 11:03:59.932105 systemd[1]: Started sshd@3-4098-10.0.0.27:22-10.0.0.1:60036.service - OpenSSH per-connection server daemon (10.0.0.1:60036). Dec 18 11:03:59.992685 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 60036 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:03:59.994108 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:03:59.997714 systemd-logind[1499]: New session '5' of user 'core' with class 'user' and type 'tty'. Dec 18 11:04:00.003904 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 18 11:04:00.015347 sshd[1684]: Connection closed by 10.0.0.1 port 60036 Dec 18 11:04:00.015876 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Dec 18 11:04:00.028807 systemd[1]: sshd@3-4098-10.0.0.27:22-10.0.0.1:60036.service: Deactivated successfully. Dec 18 11:04:00.031037 systemd[1]: session-5.scope: Deactivated successfully. Dec 18 11:04:00.031791 systemd-logind[1499]: Session 5 logged out. Waiting for processes to exit. Dec 18 11:04:00.033992 systemd[1]: Started sshd@4-8193-10.0.0.27:22-10.0.0.1:60048.service - OpenSSH per-connection server daemon (10.0.0.1:60048). Dec 18 11:04:00.034944 systemd-logind[1499]: Removed session 5. 
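The SHA256:... value in each "Accepted publickey" entry is the OpenSSH-style fingerprint: SHA-256 of the raw public-key blob, base64-encoded with the padding stripped. A sketch that recomputes it for the keys in /home/core/.ssh/authorized_keys, the file update-ssh-keys reported updating earlier; it assumes that file is present and readable:

    import base64, hashlib

    # Recompute OpenSSH SHA256 fingerprints so they can be matched against the
    # "Accepted publickey ... SHA256:..." entries in this log.
    with open("/home/core/.ssh/authorized_keys") as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2 or parts[0].startswith("#"):
                continue
            blob = base64.b64decode(parts[1])  # raw public key blob
            digest = hashlib.sha256(blob).digest()
            print(parts[0], "SHA256:" + base64.b64encode(digest).decode().rstrip("="))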
Dec 18 11:04:00.094417 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 60048 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:04:00.095762 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:04:00.100053 systemd-logind[1499]: New session '6' of user 'core' with class 'user' and type 'tty'. Dec 18 11:04:00.105902 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 18 11:04:00.124635 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 18 11:04:00.124919 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 18 11:04:00.136597 sudo[1695]: pam_unix(sudo:session): session closed for user root Dec 18 11:04:00.138112 sshd[1694]: Connection closed by 10.0.0.1 port 60048 Dec 18 11:04:00.138617 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Dec 18 11:04:00.151978 systemd[1]: sshd@4-8193-10.0.0.27:22-10.0.0.1:60048.service: Deactivated successfully. Dec 18 11:04:00.153787 systemd[1]: session-6.scope: Deactivated successfully. Dec 18 11:04:00.155325 systemd-logind[1499]: Session 6 logged out. Waiting for processes to exit. Dec 18 11:04:00.157840 systemd[1]: Started sshd@5-3-10.0.0.27:22-10.0.0.1:60062.service - OpenSSH per-connection server daemon (10.0.0.1:60062). Dec 18 11:04:00.158414 systemd-logind[1499]: Removed session 6. Dec 18 11:04:00.215205 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 60062 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:04:00.216707 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:04:00.220580 systemd-logind[1499]: New session '7' of user 'core' with class 'user' and type 'tty'. Dec 18 11:04:00.229902 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 18 11:04:00.243240 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 18 11:04:00.243521 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 18 11:04:00.246629 sudo[1708]: pam_unix(sudo:session): session closed for user root Dec 18 11:04:00.253897 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 18 11:04:00.254148 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 18 11:04:00.261011 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 18 11:04:00.296000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 18 11:04:00.298060 kernel: kauditd_printk_skb: 121 callbacks suppressed Dec 18 11:04:00.298087 kernel: audit: type=1305 audit(1766055840.296:164): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 18 11:04:00.298288 augenrules[1732]: No rules Dec 18 11:04:00.296000 audit[1732]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe885c5e0 a2=420 a3=0 items=0 ppid=1713 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:00.299582 systemd[1]: audit-rules.service: Deactivated successfully. Dec 18 11:04:00.300820 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
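[Editor's note] The audit SYSCALL/PROCTITLE records above (and the dockerd iptables/ip6tables records further below) carry the triggering command line as a hex-encoded, NUL-separated `proctitle` field; the one logged here decodes to `/sbin/auditctl -R /etc/audit/audit.rules`. A short Python sketch for decoding such fields:

```python
# Minimal sketch: decode the hex-encoded, NUL-separated "proctitle" field
# carried by the audit PROCTITLE records in this log.
import binascii

def decode_proctitle(hex_field: str) -> str:
    raw = binascii.unhexlify(hex_field)
    # argv elements are separated by NUL bytes; drop the trailing empty piece.
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))
# -> /sbin/auditctl -R /etc/audit/audit.rules
```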
Dec 18 11:04:00.302927 kernel: audit: type=1300 audit(1766055840.296:164): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe885c5e0 a2=420 a3=0 items=0 ppid=1713 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:00.296000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 18 11:04:00.303791 sudo[1707]: pam_unix(sudo:session): session closed for user root Dec 18 11:04:00.304813 kernel: audit: type=1327 audit(1766055840.296:164): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 18 11:04:00.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.304896 kernel: audit: type=1130 audit(1766055840.300:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.307348 sshd[1706]: Connection closed by 10.0.0.1 port 60062 Dec 18 11:04:00.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.309980 kernel: audit: type=1131 audit(1766055840.300:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.310175 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Dec 18 11:04:00.303000 audit[1707]: AUDIT1106 pid=1707 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.312968 kernel: audit: type=1106 audit(1766055840.303:167): pid=1707 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.303000 audit[1707]: AUDIT1104 pid=1707 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.316079 kernel: audit: type=1104 audit(1766055840.303:168): pid=1707 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 18 11:04:00.312000 audit[1702]: AUDIT1106 pid=1702 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:00.316162 kernel: audit: type=1106 audit(1766055840.312:169): pid=1702 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:00.319238 systemd[1]: sshd@5-3-10.0.0.27:22-10.0.0.1:60062.service: Deactivated successfully. Dec 18 11:04:00.312000 audit[1702]: AUDIT1104 pid=1702 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:00.321080 systemd[1]: session-7.scope: Deactivated successfully. Dec 18 11:04:00.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-3-10.0.0.27:22-10.0.0.1:60062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.322875 kernel: audit: type=1104 audit(1766055840.312:170): pid=1702 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:00.322898 kernel: audit: type=1131 audit(1766055840.319:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-3-10.0.0.27:22-10.0.0.1:60062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.323377 systemd-logind[1499]: Session 7 logged out. Waiting for processes to exit. Dec 18 11:04:00.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-4099-10.0.0.27:22-10.0.0.1:60076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.325527 systemd[1]: Started sshd@6-4099-10.0.0.27:22-10.0.0.1:60076.service - OpenSSH per-connection server daemon (10.0.0.1:60076). Dec 18 11:04:00.326448 systemd-logind[1499]: Removed session 7. 
Dec 18 11:04:00.393000 audit[1741]: AUDIT1101 pid=1741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:00.394358 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 60076 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:04:00.396000 audit[1741]: AUDIT1103 pid=1741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:00.396000 audit[1741]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea71c390 a2=3 a3=0 items=0 ppid=1 pid=1741 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:00.396000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:04:00.397766 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:04:00.402219 systemd-logind[1499]: New session '8' of user 'core' with class 'user' and type 'tty'. Dec 18 11:04:00.410948 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 18 11:04:00.412000 audit[1741]: AUDIT1105 pid=1741 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:00.414000 audit[1745]: AUDIT1103 pid=1745 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:00.423000 audit[1746]: AUDIT1101 pid=1746 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.424113 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 18 11:04:00.423000 audit[1746]: AUDIT1110 pid=1746 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.423000 audit[1746]: AUDIT1105 pid=1746 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 18 11:04:00.424384 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 18 11:04:00.736128 systemd[1]: Starting docker.service - Docker Application Container Engine... 
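[Editor's note] The NETFILTER_CFG/SYSCALL/PROCTITLE triplets that follow record dockerd creating its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) and base rules in both the IPv4 (family=2) and IPv6 (family=10) nat and filter tables; their proctitle fields decode to plain iptables/ip6tables invocations with the helper above. A hedged sketch that tallies those records from a saved journal extract (the `journal.txt` filename is a hypothetical example):

```python
# Minimal sketch: tally the NETFILTER_CFG audit records emitted while dockerd
# sets up its chains, grouped by address family, table, and nft operation.
import re
from collections import Counter

pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=\d+ op=(\w+)")
counts = Counter()

with open("journal.txt") as f:                      # hypothetical saved copy of this log
    for table, family, op in pattern.findall(f.read()):
        proto = {"2": "ipv4", "10": "ipv6"}.get(family, family)
        counts[(proto, table, op)] += 1

for key, n in sorted(counts.items()):
    print(key, n)   # e.g. ('ipv4', 'filter', 'nft_register_chain') <count>
```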
Dec 18 11:04:00.748007 (dockerd)[1771]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 18 11:04:00.990880 dockerd[1771]: time="2025-12-18T11:04:00.990758543Z" level=info msg="Starting up" Dec 18 11:04:00.993589 dockerd[1771]: time="2025-12-18T11:04:00.993462543Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 18 11:04:01.003997 dockerd[1771]: time="2025-12-18T11:04:01.003926863Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 18 11:04:01.213974 dockerd[1771]: time="2025-12-18T11:04:01.213926183Z" level=info msg="Loading containers: start." Dec 18 11:04:01.224323 kernel: Initializing XFRM netlink socket Dec 18 11:04:01.262000 audit[1827]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.262000 audit[1827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdbdb2a40 a2=0 a3=0 items=0 ppid=1771 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 18 11:04:01.264000 audit[1829]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.264000 audit[1829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd2bfe1c0 a2=0 a3=0 items=0 ppid=1771 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.264000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 18 11:04:01.266000 audit[1831]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.266000 audit[1831]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe131a540 a2=0 a3=0 items=0 ppid=1771 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.266000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 18 11:04:01.268000 audit[1833]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.268000 audit[1833]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff95f12e0 a2=0 a3=0 items=0 ppid=1771 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 18 11:04:01.270000 audit[1835]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.270000 audit[1835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffca3e97f0 a2=0 a3=0 items=0 ppid=1771 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 18 11:04:01.272000 audit[1837]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.272000 audit[1837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff2811910 a2=0 a3=0 items=0 ppid=1771 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 18 11:04:01.274000 audit[1839]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.274000 audit[1839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdc6134b0 a2=0 a3=0 items=0 ppid=1771 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 18 11:04:01.275000 audit[1841]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.275000 audit[1841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc9917f70 a2=0 a3=0 items=0 ppid=1771 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.275000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 18 11:04:01.303000 audit[1844]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.303000 audit[1844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffff1d27710 a2=0 a3=0 items=0 ppid=1771 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 18 11:04:01.305000 audit[1846]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1846 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.305000 audit[1846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc63ceb40 a2=0 a3=0 items=0 ppid=1771 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.305000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 18 11:04:01.307000 audit[1848]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.307000 audit[1848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe6991990 a2=0 a3=0 items=0 ppid=1771 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.307000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 18 11:04:01.309000 audit[1850]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.309000 audit[1850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd803ee70 a2=0 a3=0 items=0 ppid=1771 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 18 11:04:01.311000 audit[1852]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.311000 audit[1852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffdece0210 a2=0 a3=0 items=0 ppid=1771 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 18 11:04:01.342000 audit[1882]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.342000 audit[1882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe93ee540 a2=0 a3=0 items=0 ppid=1771 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.342000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 18 11:04:01.344000 audit[1884]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.344000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff59ce190 a2=0 a3=0 items=0 ppid=1771 
pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.344000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 18 11:04:01.346000 audit[1886]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.346000 audit[1886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe874ba40 a2=0 a3=0 items=0 ppid=1771 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 18 11:04:01.347000 audit[1888]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.347000 audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe32de430 a2=0 a3=0 items=0 ppid=1771 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.347000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 18 11:04:01.349000 audit[1890]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.349000 audit[1890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe8e9bf20 a2=0 a3=0 items=0 ppid=1771 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.349000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 18 11:04:01.351000 audit[1892]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.351000 audit[1892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffed69a9a0 a2=0 a3=0 items=0 ppid=1771 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.351000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 18 11:04:01.353000 audit[1894]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.353000 audit[1894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe23c4220 a2=0 a3=0 items=0 ppid=1771 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 
11:04:01.353000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 18 11:04:01.355000 audit[1896]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.355000 audit[1896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc1306050 a2=0 a3=0 items=0 ppid=1771 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 18 11:04:01.357000 audit[1898]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.357000 audit[1898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff080c0f0 a2=0 a3=0 items=0 ppid=1771 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 18 11:04:01.359000 audit[1900]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.359000 audit[1900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff0b05bb0 a2=0 a3=0 items=0 ppid=1771 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.359000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 18 11:04:01.360000 audit[1902]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.360000 audit[1902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe53f6a80 a2=0 a3=0 items=0 ppid=1771 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.360000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 18 11:04:01.362000 audit[1904]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.362000 audit[1904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff5cbd290 a2=0 a3=0 items=0 ppid=1771 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.362000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 18 11:04:01.363000 audit[1906]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.363000 audit[1906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffee9dc360 a2=0 a3=0 items=0 ppid=1771 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 18 11:04:01.368000 audit[1911]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.368000 audit[1911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff2903220 a2=0 a3=0 items=0 ppid=1771 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.368000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 18 11:04:01.370000 audit[1913]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.370000 audit[1913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe1bbda00 a2=0 a3=0 items=0 ppid=1771 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.370000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 18 11:04:01.372000 audit[1915]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.372000 audit[1915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe87aa690 a2=0 a3=0 items=0 ppid=1771 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.372000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 18 11:04:01.373000 audit[1917]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.373000 audit[1917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd44601e0 a2=0 a3=0 items=0 ppid=1771 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.373000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 18 11:04:01.375000 audit[1919]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.375000 audit[1919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd5d7bb10 a2=0 a3=0 items=0 ppid=1771 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.375000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 18 11:04:01.377000 audit[1921]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:01.377000 audit[1921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffecf64d40 a2=0 a3=0 items=0 ppid=1771 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.377000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 18 11:04:01.477000 audit[1925]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.477000 audit[1925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffeaa18b20 a2=0 a3=0 items=0 ppid=1771 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 18 11:04:01.480000 audit[1927]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.480000 audit[1927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd7999c20 a2=0 a3=0 items=0 ppid=1771 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 18 11:04:01.487000 audit[1935]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.487000 audit[1935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffee7bfd40 a2=0 a3=0 items=0 ppid=1771 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.487000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 18 11:04:01.494000 audit[1941]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 
11:04:01.494000 audit[1941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff631c920 a2=0 a3=0 items=0 ppid=1771 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 18 11:04:01.497000 audit[1943]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.497000 audit[1943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffdf0b7890 a2=0 a3=0 items=0 ppid=1771 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.497000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 18 11:04:01.498000 audit[1945]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.498000 audit[1945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff7763cd0 a2=0 a3=0 items=0 ppid=1771 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.498000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 18 11:04:01.500000 audit[1947]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.500000 audit[1947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd9375b50 a2=0 a3=0 items=0 ppid=1771 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.500000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 18 11:04:01.502000 audit[1949]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:01.502000 audit[1949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff47e5e00 a2=0 a3=0 items=0 ppid=1771 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:01.502000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 18 11:04:01.503542 
systemd-networkd[1350]: docker0: Link UP Dec 18 11:04:01.507084 dockerd[1771]: time="2025-12-18T11:04:01.507041743Z" level=info msg="Loading containers: done." Dec 18 11:04:01.522667 dockerd[1771]: time="2025-12-18T11:04:01.522575103Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 18 11:04:01.522802 dockerd[1771]: time="2025-12-18T11:04:01.522707383Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 18 11:04:01.523906 dockerd[1771]: time="2025-12-18T11:04:01.523882423Z" level=info msg="Initializing buildkit" Dec 18 11:04:01.548852 dockerd[1771]: time="2025-12-18T11:04:01.548820103Z" level=info msg="Completed buildkit initialization" Dec 18 11:04:01.555382 dockerd[1771]: time="2025-12-18T11:04:01.555337743Z" level=info msg="Daemon has completed initialization" Dec 18 11:04:01.555707 dockerd[1771]: time="2025-12-18T11:04:01.555482663Z" level=info msg="API listen on /run/docker.sock" Dec 18 11:04:01.555680 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 18 11:04:01.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:02.081006 containerd[1520]: time="2025-12-18T11:04:02.080961183Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Dec 18 11:04:02.620297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197522901.mount: Deactivated successfully. Dec 18 11:04:03.399786 containerd[1520]: time="2025-12-18T11:04:03.399177943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:03.401070 containerd[1520]: time="2025-12-18T11:04:03.400999183Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845792" Dec 18 11:04:03.401818 containerd[1520]: time="2025-12-18T11:04:03.401786543Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:03.405241 containerd[1520]: time="2025-12-18T11:04:03.405206703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:03.406706 containerd[1520]: time="2025-12-18T11:04:03.406647423Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.32563576s" Dec 18 11:04:03.406706 containerd[1520]: time="2025-12-18T11:04:03.406683623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Dec 18 11:04:03.407319 containerd[1520]: time="2025-12-18T11:04:03.407225543Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.32.11\"" Dec 18 11:04:04.458922 containerd[1520]: time="2025-12-18T11:04:04.458878183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:04.459399 containerd[1520]: time="2025-12-18T11:04:04.459380463Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Dec 18 11:04:04.460461 containerd[1520]: time="2025-12-18T11:04:04.460409103Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:04.462725 containerd[1520]: time="2025-12-18T11:04:04.462679543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:04.463838 containerd[1520]: time="2025-12-18T11:04:04.463812063Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.05655592s" Dec 18 11:04:04.463899 containerd[1520]: time="2025-12-18T11:04:04.463841023Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Dec 18 11:04:04.464226 containerd[1520]: time="2025-12-18T11:04:04.464209903Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Dec 18 11:04:05.620747 containerd[1520]: time="2025-12-18T11:04:05.619986983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:05.621121 containerd[1520]: time="2025-12-18T11:04:05.621055183Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Dec 18 11:04:05.621742 containerd[1520]: time="2025-12-18T11:04:05.621690303Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:05.624809 containerd[1520]: time="2025-12-18T11:04:05.624778223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:05.625657 containerd[1520]: time="2025-12-18T11:04:05.625625023Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.16139s" Dec 18 11:04:05.625657 containerd[1520]: time="2025-12-18T11:04:05.625655423Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference 
\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Dec 18 11:04:05.626143 containerd[1520]: time="2025-12-18T11:04:05.626121743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Dec 18 11:04:06.648546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531555511.mount: Deactivated successfully. Dec 18 11:04:07.034172 containerd[1520]: time="2025-12-18T11:04:07.034045823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:07.034801 containerd[1520]: time="2025-12-18T11:04:07.034666183Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27555003" Dec 18 11:04:07.035860 containerd[1520]: time="2025-12-18T11:04:07.035816783Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:07.037618 containerd[1520]: time="2025-12-18T11:04:07.037577943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:07.038450 containerd[1520]: time="2025-12-18T11:04:07.038425183Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.41227404s" Dec 18 11:04:07.038495 containerd[1520]: time="2025-12-18T11:04:07.038457103Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Dec 18 11:04:07.039053 containerd[1520]: time="2025-12-18T11:04:07.039031503Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 18 11:04:07.482982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount941995781.mount: Deactivated successfully. Dec 18 11:04:08.245287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 18 11:04:08.246899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 18 11:04:08.257745 containerd[1520]: time="2025-12-18T11:04:08.257657583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:08.258434 containerd[1520]: time="2025-12-18T11:04:08.258385303Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Dec 18 11:04:08.259421 containerd[1520]: time="2025-12-18T11:04:08.259357823Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:08.262765 containerd[1520]: time="2025-12-18T11:04:08.261866983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:08.263070 containerd[1520]: time="2025-12-18T11:04:08.263038983Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.2239754s" Dec 18 11:04:08.263117 containerd[1520]: time="2025-12-18T11:04:08.263070703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 18 11:04:08.263755 containerd[1520]: time="2025-12-18T11:04:08.263729863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 18 11:04:08.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:08.390932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 18 11:04:08.391787 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 18 11:04:08.391821 kernel: audit: type=1130 audit(1766055848.390:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:08.395399 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 18 11:04:08.433650 kubelet[2128]: E1218 11:04:08.433574 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 18 11:04:08.436367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 18 11:04:08.436465 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 18 11:04:08.437942 systemd[1]: kubelet.service: Consumed 151ms CPU time, 108.1M memory peak. Dec 18 11:04:08.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 18 11:04:08.440783 kernel: audit: type=1131 audit(1766055848.437:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 18 11:04:08.779263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780150975.mount: Deactivated successfully. Dec 18 11:04:08.784339 containerd[1520]: time="2025-12-18T11:04:08.784287063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 18 11:04:08.785073 containerd[1520]: time="2025-12-18T11:04:08.785025943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 18 11:04:08.785896 containerd[1520]: time="2025-12-18T11:04:08.785853063Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 18 11:04:08.788648 containerd[1520]: time="2025-12-18T11:04:08.788602743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 18 11:04:08.789240 containerd[1520]: time="2025-12-18T11:04:08.789060023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 525.30064ms" Dec 18 11:04:08.789240 containerd[1520]: time="2025-12-18T11:04:08.789086903Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 18 11:04:08.789517 containerd[1520]: time="2025-12-18T11:04:08.789492503Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 18 11:04:09.232596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995891170.mount: Deactivated successfully. 
Dec 18 11:04:11.372553 containerd[1520]: time="2025-12-18T11:04:11.372464263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:11.373370 containerd[1520]: time="2025-12-18T11:04:11.373305703Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Dec 18 11:04:11.374033 containerd[1520]: time="2025-12-18T11:04:11.373995783Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:11.376831 containerd[1520]: time="2025-12-18T11:04:11.376789943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:11.378623 containerd[1520]: time="2025-12-18T11:04:11.378584703Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.58900856s" Dec 18 11:04:11.378661 containerd[1520]: time="2025-12-18T11:04:11.378624063Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 18 11:04:16.073452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 18 11:04:16.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:16.073608 systemd[1]: kubelet.service: Consumed 151ms CPU time, 108.1M memory peak. Dec 18 11:04:16.075632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 18 11:04:16.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:16.078414 kernel: audit: type=1130 audit(1766055856.072:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:16.078506 kernel: audit: type=1131 audit(1766055856.072:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:16.099220 systemd[1]: Reload requested from client PID 2226 ('systemctl') (unit session-8.scope)... Dec 18 11:04:16.099255 systemd[1]: Reloading... Dec 18 11:04:16.187763 zram_generator::config[2281]: No configuration found. Dec 18 11:04:16.381214 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Dec 18 11:04:16.454036 systemd[1]: Reloading finished in 354 ms. 
Dec 18 11:04:16.460000 audit: BPF prog-id=33 op=LOAD Dec 18 11:04:16.461000 audit: BPF prog-id=19 op=UNLOAD Dec 18 11:04:16.461000 audit: BPF prog-id=34 op=LOAD Dec 18 11:04:16.464123 kernel: audit: type=1334 audit(1766055856.460:226): prog-id=33 op=LOAD Dec 18 11:04:16.464155 kernel: audit: type=1334 audit(1766055856.461:227): prog-id=19 op=UNLOAD Dec 18 11:04:16.464170 kernel: audit: type=1334 audit(1766055856.461:228): prog-id=34 op=LOAD Dec 18 11:04:16.461000 audit: BPF prog-id=20 op=UNLOAD Dec 18 11:04:16.464978 kernel: audit: type=1334 audit(1766055856.461:229): prog-id=20 op=UNLOAD Dec 18 11:04:16.462000 audit: BPF prog-id=35 op=LOAD Dec 18 11:04:16.465759 kernel: audit: type=1334 audit(1766055856.462:230): prog-id=35 op=LOAD Dec 18 11:04:16.462000 audit: BPF prog-id=24 op=UNLOAD Dec 18 11:04:16.463000 audit: BPF prog-id=36 op=LOAD Dec 18 11:04:16.467391 kernel: audit: type=1334 audit(1766055856.462:231): prog-id=24 op=UNLOAD Dec 18 11:04:16.467437 kernel: audit: type=1334 audit(1766055856.463:232): prog-id=36 op=LOAD Dec 18 11:04:16.463000 audit: BPF prog-id=37 op=LOAD Dec 18 11:04:16.468237 kernel: audit: type=1334 audit(1766055856.463:233): prog-id=37 op=LOAD Dec 18 11:04:16.463000 audit: BPF prog-id=25 op=UNLOAD Dec 18 11:04:16.463000 audit: BPF prog-id=26 op=UNLOAD Dec 18 11:04:16.464000 audit: BPF prog-id=38 op=LOAD Dec 18 11:04:16.464000 audit: BPF prog-id=29 op=UNLOAD Dec 18 11:04:16.465000 audit: BPF prog-id=39 op=LOAD Dec 18 11:04:16.465000 audit: BPF prog-id=30 op=UNLOAD Dec 18 11:04:16.466000 audit: BPF prog-id=40 op=LOAD Dec 18 11:04:16.466000 audit: BPF prog-id=41 op=LOAD Dec 18 11:04:16.466000 audit: BPF prog-id=31 op=UNLOAD Dec 18 11:04:16.466000 audit: BPF prog-id=32 op=UNLOAD Dec 18 11:04:16.467000 audit: BPF prog-id=42 op=LOAD Dec 18 11:04:16.467000 audit: BPF prog-id=21 op=UNLOAD Dec 18 11:04:16.467000 audit: BPF prog-id=43 op=LOAD Dec 18 11:04:16.467000 audit: BPF prog-id=44 op=LOAD Dec 18 11:04:16.467000 audit: BPF prog-id=22 op=UNLOAD Dec 18 11:04:16.467000 audit: BPF prog-id=23 op=UNLOAD Dec 18 11:04:16.468000 audit: BPF prog-id=45 op=LOAD Dec 18 11:04:16.468000 audit: BPF prog-id=16 op=UNLOAD Dec 18 11:04:16.468000 audit: BPF prog-id=46 op=LOAD Dec 18 11:04:16.468000 audit: BPF prog-id=47 op=LOAD Dec 18 11:04:16.468000 audit: BPF prog-id=17 op=UNLOAD Dec 18 11:04:16.468000 audit: BPF prog-id=18 op=UNLOAD Dec 18 11:04:16.469000 audit: BPF prog-id=48 op=LOAD Dec 18 11:04:16.469000 audit: BPF prog-id=49 op=LOAD Dec 18 11:04:16.469000 audit: BPF prog-id=27 op=UNLOAD Dec 18 11:04:16.469000 audit: BPF prog-id=28 op=UNLOAD Dec 18 11:04:16.471000 audit: BPF prog-id=50 op=LOAD Dec 18 11:04:16.471000 audit: BPF prog-id=13 op=UNLOAD Dec 18 11:04:16.471000 audit: BPF prog-id=51 op=LOAD Dec 18 11:04:16.472000 audit: BPF prog-id=52 op=LOAD Dec 18 11:04:16.472000 audit: BPF prog-id=14 op=UNLOAD Dec 18 11:04:16.472000 audit: BPF prog-id=15 op=UNLOAD Dec 18 11:04:16.491315 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 18 11:04:16.491376 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 18 11:04:16.491638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 18 11:04:16.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 18 11:04:16.491704 systemd[1]: kubelet.service: Consumed 84ms CPU time, 95.3M memory peak. 
Dec 18 11:04:16.494193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 18 11:04:16.629252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 18 11:04:16.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:16.633915 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 18 11:04:16.665349 kubelet[2323]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 18 11:04:16.665349 kubelet[2323]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 18 11:04:16.665349 kubelet[2323]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 18 11:04:16.665711 kubelet[2323]: I1218 11:04:16.665400 2323 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 18 11:04:17.740409 kubelet[2323]: I1218 11:04:17.740361 2323 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 18 11:04:17.740409 kubelet[2323]: I1218 11:04:17.740392 2323 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 18 11:04:17.740846 kubelet[2323]: I1218 11:04:17.740646 2323 server.go:954] "Client rotation is on, will bootstrap in background" Dec 18 11:04:17.763436 kubelet[2323]: E1218 11:04:17.763371 2323 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Dec 18 11:04:17.763982 kubelet[2323]: I1218 11:04:17.763907 2323 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 18 11:04:17.770378 kubelet[2323]: I1218 11:04:17.770346 2323 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 18 11:04:17.773453 kubelet[2323]: I1218 11:04:17.773423 2323 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 18 11:04:17.774110 kubelet[2323]: I1218 11:04:17.774054 2323 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 18 11:04:17.774299 kubelet[2323]: I1218 11:04:17.774105 2323 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 18 11:04:17.774394 kubelet[2323]: I1218 11:04:17.774358 2323 topology_manager.go:138] "Creating topology manager with none policy" Dec 18 11:04:17.774394 kubelet[2323]: I1218 11:04:17.774368 2323 container_manager_linux.go:304] "Creating device plugin manager" Dec 18 11:04:17.774599 kubelet[2323]: I1218 11:04:17.774570 2323 state_mem.go:36] "Initialized new in-memory state store" Dec 18 11:04:17.777044 kubelet[2323]: I1218 11:04:17.777010 2323 kubelet.go:446] "Attempting to sync node with API server" Dec 18 11:04:17.777044 kubelet[2323]: I1218 11:04:17.777039 2323 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 18 11:04:17.777108 kubelet[2323]: I1218 11:04:17.777066 2323 kubelet.go:352] "Adding apiserver pod source" Dec 18 11:04:17.777108 kubelet[2323]: I1218 11:04:17.777078 2323 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 18 11:04:17.780010 kubelet[2323]: W1218 11:04:17.779934 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 18 11:04:17.780010 kubelet[2323]: E1218 11:04:17.780006 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Dec 18 11:04:17.780876 kubelet[2323]: W1218 11:04:17.780833 2323 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 18 11:04:17.781024 kubelet[2323]: I1218 11:04:17.780944 2323 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 18 11:04:17.781024 kubelet[2323]: E1218 11:04:17.780997 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Dec 18 11:04:17.781710 kubelet[2323]: I1218 11:04:17.781677 2323 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 18 11:04:17.781830 kubelet[2323]: W1218 11:04:17.781815 2323 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 18 11:04:17.785443 kubelet[2323]: I1218 11:04:17.784501 2323 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 18 11:04:17.785443 kubelet[2323]: I1218 11:04:17.784542 2323 server.go:1287] "Started kubelet" Dec 18 11:04:17.786275 kubelet[2323]: I1218 11:04:17.786245 2323 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 18 11:04:17.786363 kubelet[2323]: I1218 11:04:17.786288 2323 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 18 11:04:17.787395 kubelet[2323]: I1218 11:04:17.787056 2323 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 18 11:04:17.787467 kubelet[2323]: I1218 11:04:17.787416 2323 server.go:479] "Adding debug handlers to kubelet server" Dec 18 11:04:17.790330 kubelet[2323]: I1218 11:04:17.790274 2323 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 18 11:04:17.790399 kubelet[2323]: I1218 11:04:17.790352 2323 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 18 11:04:17.791442 kubelet[2323]: E1218 11:04:17.791358 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="200ms" Dec 18 11:04:17.791677 kubelet[2323]: E1218 11:04:17.791460 2323 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18824a7218be85ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-18 11:04:17.784522223 +0000 UTC m=+1.147623281,LastTimestamp:2025-12-18 11:04:17.784522223 +0000 UTC m=+1.147623281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 18 
11:04:17.791964 kubelet[2323]: I1218 11:04:17.791925 2323 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 18 11:04:17.792004 kubelet[2323]: E1218 11:04:17.791963 2323 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 18 11:04:17.792032 kubelet[2323]: I1218 11:04:17.792019 2323 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 18 11:04:17.792078 kubelet[2323]: I1218 11:04:17.792065 2323 reconciler.go:26] "Reconciler: start to sync state" Dec 18 11:04:17.792632 kubelet[2323]: W1218 11:04:17.792399 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 18 11:04:17.792632 kubelet[2323]: E1218 11:04:17.792458 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Dec 18 11:04:17.792873 kubelet[2323]: E1218 11:04:17.792856 2323 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 18 11:04:17.793104 kubelet[2323]: I1218 11:04:17.793040 2323 factory.go:221] Registration of the systemd container factory successfully Dec 18 11:04:17.793183 kubelet[2323]: I1218 11:04:17.793168 2323 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 18 11:04:17.794251 kubelet[2323]: I1218 11:04:17.794219 2323 factory.go:221] Registration of the containerd container factory successfully Dec 18 11:04:17.794000 audit[2337]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:17.794000 audit[2337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd8b63040 a2=0 a3=0 items=0 ppid=2323 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 18 11:04:17.795000 audit[2340]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:17.795000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8dc8e10 a2=0 a3=0 items=0 ppid=2323 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 18 11:04:17.797000 audit[2342]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:17.797000 audit[2342]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=340 a0=3 a1=ffffd48a7530 a2=0 a3=0 items=0 ppid=2323 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.797000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 18 11:04:17.800000 audit[2344]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:17.800000 audit[2344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd2167f30 a2=0 a3=0 items=0 ppid=2323 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 18 11:04:17.805000 audit[2349]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:17.805000 audit[2349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe28e8210 a2=0 a3=0 items=0 ppid=2323 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.805000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 18 11:04:17.806845 kubelet[2323]: I1218 11:04:17.806805 2323 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 18 11:04:17.806000 audit[2350]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:17.806000 audit[2350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdef4d730 a2=0 a3=0 items=0 ppid=2323 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.806000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 18 11:04:17.807000 audit[2351]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:17.807000 audit[2351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc007f610 a2=0 a3=0 items=0 ppid=2323 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 18 11:04:17.808521 kubelet[2323]: I1218 11:04:17.808477 2323 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 18 11:04:17.808521 kubelet[2323]: I1218 11:04:17.808519 2323 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 18 11:04:17.808587 kubelet[2323]: I1218 11:04:17.808536 2323 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 18 11:04:17.808587 kubelet[2323]: I1218 11:04:17.808542 2323 kubelet.go:2382] "Starting kubelet main sync loop" Dec 18 11:04:17.808633 kubelet[2323]: E1218 11:04:17.808596 2323 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 18 11:04:17.809631 kubelet[2323]: I1218 11:04:17.808961 2323 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 18 11:04:17.809631 kubelet[2323]: I1218 11:04:17.808976 2323 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 18 11:04:17.809631 kubelet[2323]: I1218 11:04:17.808994 2323 state_mem.go:36] "Initialized new in-memory state store" Dec 18 11:04:17.809631 kubelet[2323]: W1218 11:04:17.809084 2323 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 18 11:04:17.809631 kubelet[2323]: E1218 11:04:17.809115 2323 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Dec 18 11:04:17.809000 audit[2352]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:17.809000 audit[2352]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd24933f0 a2=0 a3=0 items=0 ppid=2323 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 18 11:04:17.809000 audit[2353]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:17.809000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd08d56e0 a2=0 a3=0 items=0 ppid=2323 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 18 11:04:17.810000 audit[2356]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:17.810000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe85fdd80 a2=0 a3=0 items=0 ppid=2323 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 18 11:04:17.810000 audit[2355]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:17.810000 audit[2355]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3b97740 a2=0 a3=0 items=0 ppid=2323 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 18 11:04:17.811000 audit[2357]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:17.811000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf0d4940 a2=0 a3=0 items=0 ppid=2323 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:17.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 18 11:04:17.892965 kubelet[2323]: E1218 11:04:17.892925 2323 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 18 11:04:17.892965 kubelet[2323]: I1218 11:04:17.892956 2323 policy_none.go:49] "None policy: Start" Dec 18 11:04:17.893080 kubelet[2323]: I1218 11:04:17.892985 2323 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 18 11:04:17.893080 kubelet[2323]: I1218 11:04:17.892999 2323 state_mem.go:35] "Initializing new in-memory state store" Dec 18 11:04:17.899753 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 18 11:04:17.909256 kubelet[2323]: E1218 11:04:17.909153 2323 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 18 11:04:17.913339 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 18 11:04:17.937484 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 18 11:04:17.939191 kubelet[2323]: I1218 11:04:17.938836 2323 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 18 11:04:17.939191 kubelet[2323]: I1218 11:04:17.939045 2323 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 18 11:04:17.939191 kubelet[2323]: I1218 11:04:17.939057 2323 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 18 11:04:17.939808 kubelet[2323]: I1218 11:04:17.939267 2323 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 18 11:04:17.940966 kubelet[2323]: E1218 11:04:17.940943 2323 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 18 11:04:17.941063 kubelet[2323]: E1218 11:04:17.940983 2323 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 18 11:04:17.992921 kubelet[2323]: E1218 11:04:17.992796 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="400ms" Dec 18 11:04:18.041292 kubelet[2323]: I1218 11:04:18.041252 2323 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 18 11:04:18.041722 kubelet[2323]: E1218 11:04:18.041694 2323 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Dec 18 11:04:18.118346 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Dec 18 11:04:18.137569 kubelet[2323]: E1218 11:04:18.137512 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 18 11:04:18.141109 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Dec 18 11:04:18.142700 kubelet[2323]: E1218 11:04:18.142680 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 18 11:04:18.157680 systemd[1]: Created slice kubepods-burstable-pod5a9d528305958a399919a7f1a8d9296b.slice - libcontainer container kubepods-burstable-pod5a9d528305958a399919a7f1a8d9296b.slice. 
Dec 18 11:04:18.159325 kubelet[2323]: E1218 11:04:18.159302 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 18 11:04:18.194705 kubelet[2323]: I1218 11:04:18.194670 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:18.194781 kubelet[2323]: I1218 11:04:18.194730 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a9d528305958a399919a7f1a8d9296b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5a9d528305958a399919a7f1a8d9296b\") " pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:18.194781 kubelet[2323]: I1218 11:04:18.194752 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:18.194847 kubelet[2323]: I1218 11:04:18.194768 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:18.194847 kubelet[2323]: I1218 11:04:18.194831 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:18.195013 kubelet[2323]: I1218 11:04:18.194846 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a9d528305958a399919a7f1a8d9296b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5a9d528305958a399919a7f1a8d9296b\") " pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:18.195013 kubelet[2323]: I1218 11:04:18.194861 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:18.195013 kubelet[2323]: I1218 11:04:18.194877 2323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Dec 18 11:04:18.195013 kubelet[2323]: I1218 11:04:18.194934 2323 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a9d528305958a399919a7f1a8d9296b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5a9d528305958a399919a7f1a8d9296b\") " pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:18.243937 kubelet[2323]: I1218 11:04:18.243837 2323 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 18 11:04:18.244569 kubelet[2323]: E1218 11:04:18.244367 2323 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Dec 18 11:04:18.394226 kubelet[2323]: E1218 11:04:18.394185 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="800ms" Dec 18 11:04:18.438594 kubelet[2323]: E1218 11:04:18.438556 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:18.439304 containerd[1520]: time="2025-12-18T11:04:18.439269463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Dec 18 11:04:18.443464 kubelet[2323]: E1218 11:04:18.443439 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:18.443842 containerd[1520]: time="2025-12-18T11:04:18.443806183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Dec 18 11:04:18.460744 kubelet[2323]: E1218 11:04:18.460197 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:18.460824 containerd[1520]: time="2025-12-18T11:04:18.460632463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5a9d528305958a399919a7f1a8d9296b,Namespace:kube-system,Attempt:0,}" Dec 18 11:04:18.464698 containerd[1520]: time="2025-12-18T11:04:18.464652423Z" level=info msg="connecting to shim 68614b958c0f67b06ce9915189133ebf6a2fa39f4a3938fd790c74273dca2b5f" address="unix:///run/containerd/s/4bd13c2a73e2ef5cc6a2214ca84d869ea31e7ab1e92ae198e7c88a10ae54c662" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:04:18.468998 containerd[1520]: time="2025-12-18T11:04:18.468456103Z" level=info msg="connecting to shim 0045d7550a1d8234a453b9838b30bf7a993d76fb65f6854444db110124e87f38" address="unix:///run/containerd/s/75772ac2a01fd3444790f6e584cf6f3354665c779662cbc9b1699b205445e388" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:04:18.491033 containerd[1520]: time="2025-12-18T11:04:18.490982543Z" level=info msg="connecting to shim 6faa6bead060ad96fa7100ffc211599397c75169eca5bc7e184c5c0e74977ae5" address="unix:///run/containerd/s/80ff11dd0f999582652a73a078ac8bb425cab757976c007b1c404283bfa43fd9" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:04:18.499927 systemd[1]: Started cri-containerd-68614b958c0f67b06ce9915189133ebf6a2fa39f4a3938fd790c74273dca2b5f.scope - libcontainer container 
68614b958c0f67b06ce9915189133ebf6a2fa39f4a3938fd790c74273dca2b5f. Dec 18 11:04:18.503869 systemd[1]: Started cri-containerd-0045d7550a1d8234a453b9838b30bf7a993d76fb65f6854444db110124e87f38.scope - libcontainer container 0045d7550a1d8234a453b9838b30bf7a993d76fb65f6854444db110124e87f38. Dec 18 11:04:18.513933 systemd[1]: Started cri-containerd-6faa6bead060ad96fa7100ffc211599397c75169eca5bc7e184c5c0e74977ae5.scope - libcontainer container 6faa6bead060ad96fa7100ffc211599397c75169eca5bc7e184c5c0e74977ae5. Dec 18 11:04:18.518000 audit: BPF prog-id=53 op=LOAD Dec 18 11:04:18.519000 audit: BPF prog-id=54 op=LOAD Dec 18 11:04:18.519000 audit: BPF prog-id=55 op=LOAD Dec 18 11:04:18.519000 audit[2403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2384 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343564373535306131643832333461343533623938333862333062 Dec 18 11:04:18.520000 audit: BPF prog-id=55 op=UNLOAD Dec 18 11:04:18.520000 audit[2403]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2384 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343564373535306131643832333461343533623938333862333062 Dec 18 11:04:18.520000 audit: BPF prog-id=56 op=LOAD Dec 18 11:04:18.520000 audit[2403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2384 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343564373535306131643832333461343533623938333862333062 Dec 18 11:04:18.520000 audit: BPF prog-id=57 op=LOAD Dec 18 11:04:18.520000 audit[2396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2367 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638363134623935386330663637623036636539393135313839313333 Dec 18 11:04:18.520000 audit: BPF prog-id=57 op=UNLOAD Dec 18 11:04:18.520000 audit[2396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2367 pid=2396 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638363134623935386330663637623036636539393135313839313333 Dec 18 11:04:18.520000 audit: BPF prog-id=58 op=LOAD Dec 18 11:04:18.520000 audit[2403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2384 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343564373535306131643832333461343533623938333862333062 Dec 18 11:04:18.520000 audit: BPF prog-id=58 op=UNLOAD Dec 18 11:04:18.520000 audit[2403]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2384 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343564373535306131643832333461343533623938333862333062 Dec 18 11:04:18.520000 audit: BPF prog-id=56 op=UNLOAD Dec 18 11:04:18.520000 audit[2403]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2384 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343564373535306131643832333461343533623938333862333062 Dec 18 11:04:18.520000 audit: BPF prog-id=59 op=LOAD Dec 18 11:04:18.520000 audit[2396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2367 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638363134623935386330663637623036636539393135313839313333 Dec 18 11:04:18.520000 audit: BPF prog-id=60 op=LOAD Dec 18 11:04:18.520000 audit[2396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2367 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 
11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638363134623935386330663637623036636539393135313839313333 Dec 18 11:04:18.520000 audit: BPF prog-id=60 op=UNLOAD Dec 18 11:04:18.520000 audit[2396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2367 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638363134623935386330663637623036636539393135313839313333 Dec 18 11:04:18.520000 audit: BPF prog-id=59 op=UNLOAD Dec 18 11:04:18.520000 audit[2396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2367 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638363134623935386330663637623036636539393135313839313333 Dec 18 11:04:18.520000 audit: BPF prog-id=61 op=LOAD Dec 18 11:04:18.520000 audit[2403]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2384 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343564373535306131643832333461343533623938333862333062 Dec 18 11:04:18.520000 audit: BPF prog-id=62 op=LOAD Dec 18 11:04:18.520000 audit[2396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2367 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638363134623935386330663637623036636539393135313839313333 Dec 18 11:04:18.524000 audit: BPF prog-id=63 op=LOAD Dec 18 11:04:18.525000 audit: BPF prog-id=64 op=LOAD Dec 18 11:04:18.525000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2429 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.525000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666616136626561643036306164393666613731303066666332313135 Dec 18 11:04:18.525000 audit: BPF prog-id=64 op=UNLOAD Dec 18 11:04:18.525000 audit[2441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666616136626561643036306164393666613731303066666332313135 Dec 18 11:04:18.525000 audit: BPF prog-id=65 op=LOAD Dec 18 11:04:18.525000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2429 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666616136626561643036306164393666613731303066666332313135 Dec 18 11:04:18.525000 audit: BPF prog-id=66 op=LOAD Dec 18 11:04:18.525000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2429 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666616136626561643036306164393666613731303066666332313135 Dec 18 11:04:18.525000 audit: BPF prog-id=66 op=UNLOAD Dec 18 11:04:18.525000 audit[2441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666616136626561643036306164393666613731303066666332313135 Dec 18 11:04:18.525000 audit: BPF prog-id=65 op=UNLOAD Dec 18 11:04:18.525000 audit[2441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.525000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666616136626561643036306164393666613731303066666332313135 Dec 18 11:04:18.525000 audit: BPF prog-id=67 op=LOAD Dec 18 11:04:18.525000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2429 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666616136626561643036306164393666613731303066666332313135 Dec 18 11:04:18.547324 containerd[1520]: time="2025-12-18T11:04:18.547199503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"68614b958c0f67b06ce9915189133ebf6a2fa39f4a3938fd790c74273dca2b5f\"" Dec 18 11:04:18.548486 kubelet[2323]: E1218 11:04:18.548393 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:18.552145 containerd[1520]: time="2025-12-18T11:04:18.552092063Z" level=info msg="CreateContainer within sandbox \"68614b958c0f67b06ce9915189133ebf6a2fa39f4a3938fd790c74273dca2b5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 18 11:04:18.558453 containerd[1520]: time="2025-12-18T11:04:18.558415263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5a9d528305958a399919a7f1a8d9296b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6faa6bead060ad96fa7100ffc211599397c75169eca5bc7e184c5c0e74977ae5\"" Dec 18 11:04:18.559641 kubelet[2323]: E1218 11:04:18.559614 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:18.560409 containerd[1520]: time="2025-12-18T11:04:18.560376463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0045d7550a1d8234a453b9838b30bf7a993d76fb65f6854444db110124e87f38\"" Dec 18 11:04:18.561008 kubelet[2323]: E1218 11:04:18.560983 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:18.561701 containerd[1520]: time="2025-12-18T11:04:18.561610423Z" level=info msg="CreateContainer within sandbox \"6faa6bead060ad96fa7100ffc211599397c75169eca5bc7e184c5c0e74977ae5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 18 11:04:18.562431 containerd[1520]: time="2025-12-18T11:04:18.562398143Z" level=info msg="CreateContainer within sandbox \"0045d7550a1d8234a453b9838b30bf7a993d76fb65f6854444db110124e87f38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 18 11:04:18.564010 containerd[1520]: time="2025-12-18T11:04:18.563974663Z" level=info msg="Container 
d4564bf76e26d5838eb682c111ed7df14b13e556bb598116b338977f18e56c7f: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:04:18.572768 containerd[1520]: time="2025-12-18T11:04:18.572149303Z" level=info msg="Container 61d71a8f0d1de2594ef3b16b26768c64de42023b58bca96c1e3249bc461cba32: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:04:18.576181 containerd[1520]: time="2025-12-18T11:04:18.576136263Z" level=info msg="Container a1917c208616e1ee0741e5ea037ed55dcff03639be4633d9874fdadf57272b8f: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:04:18.576969 containerd[1520]: time="2025-12-18T11:04:18.576926823Z" level=info msg="CreateContainer within sandbox \"68614b958c0f67b06ce9915189133ebf6a2fa39f4a3938fd790c74273dca2b5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4564bf76e26d5838eb682c111ed7df14b13e556bb598116b338977f18e56c7f\"" Dec 18 11:04:18.578276 containerd[1520]: time="2025-12-18T11:04:18.578244103Z" level=info msg="StartContainer for \"d4564bf76e26d5838eb682c111ed7df14b13e556bb598116b338977f18e56c7f\"" Dec 18 11:04:18.578792 containerd[1520]: time="2025-12-18T11:04:18.578708063Z" level=info msg="CreateContainer within sandbox \"0045d7550a1d8234a453b9838b30bf7a993d76fb65f6854444db110124e87f38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"61d71a8f0d1de2594ef3b16b26768c64de42023b58bca96c1e3249bc461cba32\"" Dec 18 11:04:18.579203 containerd[1520]: time="2025-12-18T11:04:18.579178743Z" level=info msg="StartContainer for \"61d71a8f0d1de2594ef3b16b26768c64de42023b58bca96c1e3249bc461cba32\"" Dec 18 11:04:18.580267 containerd[1520]: time="2025-12-18T11:04:18.580189783Z" level=info msg="connecting to shim d4564bf76e26d5838eb682c111ed7df14b13e556bb598116b338977f18e56c7f" address="unix:///run/containerd/s/4bd13c2a73e2ef5cc6a2214ca84d869ea31e7ab1e92ae198e7c88a10ae54c662" protocol=ttrpc version=3 Dec 18 11:04:18.580324 containerd[1520]: time="2025-12-18T11:04:18.580280343Z" level=info msg="connecting to shim 61d71a8f0d1de2594ef3b16b26768c64de42023b58bca96c1e3249bc461cba32" address="unix:///run/containerd/s/75772ac2a01fd3444790f6e584cf6f3354665c779662cbc9b1699b205445e388" protocol=ttrpc version=3 Dec 18 11:04:18.585494 containerd[1520]: time="2025-12-18T11:04:18.585442863Z" level=info msg="CreateContainer within sandbox \"6faa6bead060ad96fa7100ffc211599397c75169eca5bc7e184c5c0e74977ae5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a1917c208616e1ee0741e5ea037ed55dcff03639be4633d9874fdadf57272b8f\"" Dec 18 11:04:18.586042 containerd[1520]: time="2025-12-18T11:04:18.586014263Z" level=info msg="StartContainer for \"a1917c208616e1ee0741e5ea037ed55dcff03639be4633d9874fdadf57272b8f\"" Dec 18 11:04:18.587160 containerd[1520]: time="2025-12-18T11:04:18.587131543Z" level=info msg="connecting to shim a1917c208616e1ee0741e5ea037ed55dcff03639be4633d9874fdadf57272b8f" address="unix:///run/containerd/s/80ff11dd0f999582652a73a078ac8bb425cab757976c007b1c404283bfa43fd9" protocol=ttrpc version=3 Dec 18 11:04:18.599910 systemd[1]: Started cri-containerd-61d71a8f0d1de2594ef3b16b26768c64de42023b58bca96c1e3249bc461cba32.scope - libcontainer container 61d71a8f0d1de2594ef3b16b26768c64de42023b58bca96c1e3249bc461cba32. Dec 18 11:04:18.603446 systemd[1]: Started cri-containerd-d4564bf76e26d5838eb682c111ed7df14b13e556bb598116b338977f18e56c7f.scope - libcontainer container d4564bf76e26d5838eb682c111ed7df14b13e556bb598116b338977f18e56c7f. 
Dec 18 11:04:18.609440 systemd[1]: Started cri-containerd-a1917c208616e1ee0741e5ea037ed55dcff03639be4633d9874fdadf57272b8f.scope - libcontainer container a1917c208616e1ee0741e5ea037ed55dcff03639be4633d9874fdadf57272b8f. Dec 18 11:04:18.611000 audit: BPF prog-id=68 op=LOAD Dec 18 11:04:18.612000 audit: BPF prog-id=69 op=LOAD Dec 18 11:04:18.612000 audit[2499]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2384 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643731613866306431646532353934656633623136623236373638 Dec 18 11:04:18.612000 audit: BPF prog-id=69 op=UNLOAD Dec 18 11:04:18.612000 audit[2499]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2384 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643731613866306431646532353934656633623136623236373638 Dec 18 11:04:18.613000 audit: BPF prog-id=70 op=LOAD Dec 18 11:04:18.613000 audit[2499]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2384 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643731613866306431646532353934656633623136623236373638 Dec 18 11:04:18.613000 audit: BPF prog-id=71 op=LOAD Dec 18 11:04:18.613000 audit[2499]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2384 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643731613866306431646532353934656633623136623236373638 Dec 18 11:04:18.613000 audit: BPF prog-id=71 op=UNLOAD Dec 18 11:04:18.613000 audit[2499]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2384 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.613000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643731613866306431646532353934656633623136623236373638 Dec 18 11:04:18.613000 audit: BPF prog-id=70 op=UNLOAD Dec 18 11:04:18.613000 audit[2499]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2384 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643731613866306431646532353934656633623136623236373638 Dec 18 11:04:18.613000 audit: BPF prog-id=72 op=LOAD Dec 18 11:04:18.613000 audit[2499]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2384 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643731613866306431646532353934656633623136623236373638 Dec 18 11:04:18.621000 audit: BPF prog-id=73 op=LOAD Dec 18 11:04:18.622000 audit: BPF prog-id=74 op=LOAD Dec 18 11:04:18.622000 audit[2498]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2367 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353634626637366532366435383338656236383263313131656437 Dec 18 11:04:18.622000 audit: BPF prog-id=74 op=UNLOAD Dec 18 11:04:18.622000 audit[2498]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2367 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353634626637366532366435383338656236383263313131656437 Dec 18 11:04:18.623000 audit: BPF prog-id=75 op=LOAD Dec 18 11:04:18.623000 audit[2498]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2367 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.623000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353634626637366532366435383338656236383263313131656437 Dec 18 11:04:18.623000 audit: BPF prog-id=76 op=LOAD Dec 18 11:04:18.623000 audit[2498]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2367 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353634626637366532366435383338656236383263313131656437 Dec 18 11:04:18.623000 audit: BPF prog-id=76 op=UNLOAD Dec 18 11:04:18.623000 audit[2498]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2367 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353634626637366532366435383338656236383263313131656437 Dec 18 11:04:18.623000 audit: BPF prog-id=75 op=UNLOAD Dec 18 11:04:18.623000 audit[2498]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2367 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353634626637366532366435383338656236383263313131656437 Dec 18 11:04:18.623000 audit: BPF prog-id=77 op=LOAD Dec 18 11:04:18.623000 audit[2498]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2367 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353634626637366532366435383338656236383263313131656437 Dec 18 11:04:18.624000 audit: BPF prog-id=78 op=LOAD Dec 18 11:04:18.624000 audit: BPF prog-id=79 op=LOAD Dec 18 11:04:18.624000 audit[2510]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2429 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.624000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131393137633230383631366531656530373431653565613033376564 Dec 18 11:04:18.625000 audit: BPF prog-id=79 op=UNLOAD Dec 18 11:04:18.625000 audit[2510]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131393137633230383631366531656530373431653565613033376564 Dec 18 11:04:18.625000 audit: BPF prog-id=80 op=LOAD Dec 18 11:04:18.625000 audit[2510]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2429 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131393137633230383631366531656530373431653565613033376564 Dec 18 11:04:18.625000 audit: BPF prog-id=81 op=LOAD Dec 18 11:04:18.625000 audit[2510]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2429 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131393137633230383631366531656530373431653565613033376564 Dec 18 11:04:18.625000 audit: BPF prog-id=81 op=UNLOAD Dec 18 11:04:18.625000 audit[2510]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131393137633230383631366531656530373431653565613033376564 Dec 18 11:04:18.625000 audit: BPF prog-id=80 op=UNLOAD Dec 18 11:04:18.625000 audit[2510]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.625000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131393137633230383631366531656530373431653565613033376564 Dec 18 11:04:18.626000 audit: BPF prog-id=82 op=LOAD Dec 18 11:04:18.626000 audit[2510]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2429 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:18.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131393137633230383631366531656530373431653565613033376564 Dec 18 11:04:18.643388 containerd[1520]: time="2025-12-18T11:04:18.643326063Z" level=info msg="StartContainer for \"61d71a8f0d1de2594ef3b16b26768c64de42023b58bca96c1e3249bc461cba32\" returns successfully" Dec 18 11:04:18.645994 kubelet[2323]: I1218 11:04:18.645488 2323 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 18 11:04:18.646431 kubelet[2323]: E1218 11:04:18.646317 2323 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Dec 18 11:04:18.663081 containerd[1520]: time="2025-12-18T11:04:18.662852303Z" level=info msg="StartContainer for \"d4564bf76e26d5838eb682c111ed7df14b13e556bb598116b338977f18e56c7f\" returns successfully" Dec 18 11:04:18.663905 containerd[1520]: time="2025-12-18T11:04:18.662942863Z" level=info msg="StartContainer for \"a1917c208616e1ee0741e5ea037ed55dcff03639be4633d9874fdadf57272b8f\" returns successfully" Dec 18 11:04:18.815009 kubelet[2323]: E1218 11:04:18.814891 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 18 11:04:18.816074 kubelet[2323]: E1218 11:04:18.816038 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:18.819428 kubelet[2323]: E1218 11:04:18.819406 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 18 11:04:18.819548 kubelet[2323]: E1218 11:04:18.819529 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:18.821458 kubelet[2323]: E1218 11:04:18.821298 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 18 11:04:18.821458 kubelet[2323]: E1218 11:04:18.821409 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:19.448370 kubelet[2323]: I1218 11:04:19.448341 2323 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 18 11:04:19.822795 kubelet[2323]: E1218 11:04:19.822568 2323 kubelet.go:3190] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 18 11:04:19.823501 kubelet[2323]: E1218 11:04:19.822968 2323 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 18 11:04:19.823501 kubelet[2323]: E1218 11:04:19.823344 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:19.823501 kubelet[2323]: E1218 11:04:19.823464 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:20.940831 kubelet[2323]: E1218 11:04:20.940789 2323 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 18 11:04:21.007425 kubelet[2323]: E1218 11:04:21.007148 2323 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18824a7218be85ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-18 11:04:17.784522223 +0000 UTC m=+1.147623281,LastTimestamp:2025-12-18 11:04:17.784522223 +0000 UTC m=+1.147623281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 18 11:04:21.066694 kubelet[2323]: E1218 11:04:21.066570 2323 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18824a72193d8057 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-18 11:04:17.792843863 +0000 UTC m=+1.155944921,LastTimestamp:2025-12-18 11:04:17.792843863 +0000 UTC m=+1.155944921,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 18 11:04:21.071924 kubelet[2323]: I1218 11:04:21.070618 2323 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 18 11:04:21.071924 kubelet[2323]: E1218 11:04:21.071772 2323 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 18 11:04:21.092089 kubelet[2323]: I1218 11:04:21.092055 2323 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:21.097733 kubelet[2323]: E1218 11:04:21.097678 2323 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:21.097733 kubelet[2323]: I1218 11:04:21.097710 2323 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:21.100057 kubelet[2323]: E1218 11:04:21.099929 2323 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:21.100057 kubelet[2323]: I1218 11:04:21.099955 2323 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 18 11:04:21.102033 kubelet[2323]: E1218 11:04:21.102005 2323 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 18 11:04:21.782328 kubelet[2323]: I1218 11:04:21.782078 2323 apiserver.go:52] "Watching apiserver" Dec 18 11:04:21.792586 kubelet[2323]: I1218 11:04:21.792551 2323 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 18 11:04:23.070139 systemd[1]: Reload requested from client PID 2599 ('systemctl') (unit session-8.scope)... Dec 18 11:04:23.070155 systemd[1]: Reloading... Dec 18 11:04:23.145764 zram_generator::config[2653]: No configuration found. Dec 18 11:04:23.300915 kubelet[2323]: I1218 11:04:23.300887 2323 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:23.306764 kubelet[2323]: E1218 11:04:23.306710 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:23.357571 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Dec 18 11:04:23.441290 systemd[1]: Reloading finished in 370 ms. 
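The dns.go:153 "Nameserver limits exceeded" warnings that recur throughout this log reflect the resolver's limit of three nameserver entries: the kubelet applies only the first three from the node's resolv.conf and reports them in the message. A rough sketch of that trimming (assumed behavior for illustration, not kubelet source):

    # Keep only the first `limit` nameservers, as the warning above describes.
    def applied_nameservers(resolv_conf: str, limit: int = 3) -> list[str]:
        servers = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:limit]

    example = "\n".join([
        "nameserver 1.1.1.1",
        "nameserver 1.0.0.1",
        "nameserver 8.8.8.8",
        "nameserver 9.9.9.9",   # hypothetical fourth entry that would be dropped
    ])
    print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']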
Dec 18 11:04:23.453000 audit: BPF prog-id=83 op=LOAD Dec 18 11:04:23.456165 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 18 11:04:23.456200 kernel: audit: type=1334 audit(1766055863.453:328): prog-id=83 op=LOAD Dec 18 11:04:23.453000 audit: BPF prog-id=33 op=UNLOAD Dec 18 11:04:23.457206 kernel: audit: type=1334 audit(1766055863.453:329): prog-id=33 op=UNLOAD Dec 18 11:04:23.454000 audit: BPF prog-id=84 op=LOAD Dec 18 11:04:23.458088 kernel: audit: type=1334 audit(1766055863.454:330): prog-id=84 op=LOAD Dec 18 11:04:23.454000 audit: BPF prog-id=54 op=UNLOAD Dec 18 11:04:23.459018 kernel: audit: type=1334 audit(1766055863.454:331): prog-id=54 op=UNLOAD Dec 18 11:04:23.455000 audit: BPF prog-id=85 op=LOAD Dec 18 11:04:23.459889 kernel: audit: type=1334 audit(1766055863.455:332): prog-id=85 op=LOAD Dec 18 11:04:23.455000 audit: BPF prog-id=73 op=UNLOAD Dec 18 11:04:23.460764 kernel: audit: type=1334 audit(1766055863.455:333): prog-id=73 op=UNLOAD Dec 18 11:04:23.456000 audit: BPF prog-id=86 op=LOAD Dec 18 11:04:23.456000 audit: BPF prog-id=34 op=UNLOAD Dec 18 11:04:23.462434 kernel: audit: type=1334 audit(1766055863.456:334): prog-id=86 op=LOAD Dec 18 11:04:23.462515 kernel: audit: type=1334 audit(1766055863.456:335): prog-id=34 op=UNLOAD Dec 18 11:04:23.457000 audit: BPF prog-id=87 op=LOAD Dec 18 11:04:23.463369 kernel: audit: type=1334 audit(1766055863.457:336): prog-id=87 op=LOAD Dec 18 11:04:23.457000 audit: BPF prog-id=35 op=UNLOAD Dec 18 11:04:23.464285 kernel: audit: type=1334 audit(1766055863.457:337): prog-id=35 op=UNLOAD Dec 18 11:04:23.457000 audit: BPF prog-id=88 op=LOAD Dec 18 11:04:23.457000 audit: BPF prog-id=89 op=LOAD Dec 18 11:04:23.457000 audit: BPF prog-id=36 op=UNLOAD Dec 18 11:04:23.457000 audit: BPF prog-id=37 op=UNLOAD Dec 18 11:04:23.458000 audit: BPF prog-id=90 op=LOAD Dec 18 11:04:23.458000 audit: BPF prog-id=38 op=UNLOAD Dec 18 11:04:23.460000 audit: BPF prog-id=91 op=LOAD Dec 18 11:04:23.460000 audit: BPF prog-id=39 op=UNLOAD Dec 18 11:04:23.460000 audit: BPF prog-id=92 op=LOAD Dec 18 11:04:23.460000 audit: BPF prog-id=93 op=LOAD Dec 18 11:04:23.460000 audit: BPF prog-id=40 op=UNLOAD Dec 18 11:04:23.460000 audit: BPF prog-id=41 op=UNLOAD Dec 18 11:04:23.461000 audit: BPF prog-id=94 op=LOAD Dec 18 11:04:23.461000 audit: BPF prog-id=42 op=UNLOAD Dec 18 11:04:23.461000 audit: BPF prog-id=95 op=LOAD Dec 18 11:04:23.461000 audit: BPF prog-id=96 op=LOAD Dec 18 11:04:23.461000 audit: BPF prog-id=43 op=UNLOAD Dec 18 11:04:23.461000 audit: BPF prog-id=44 op=UNLOAD Dec 18 11:04:23.462000 audit: BPF prog-id=97 op=LOAD Dec 18 11:04:23.462000 audit: BPF prog-id=53 op=UNLOAD Dec 18 11:04:23.464000 audit: BPF prog-id=98 op=LOAD Dec 18 11:04:23.464000 audit: BPF prog-id=45 op=UNLOAD Dec 18 11:04:23.464000 audit: BPF prog-id=99 op=LOAD Dec 18 11:04:23.464000 audit: BPF prog-id=100 op=LOAD Dec 18 11:04:23.464000 audit: BPF prog-id=46 op=UNLOAD Dec 18 11:04:23.464000 audit: BPF prog-id=47 op=UNLOAD Dec 18 11:04:23.464000 audit: BPF prog-id=101 op=LOAD Dec 18 11:04:23.464000 audit: BPF prog-id=78 op=UNLOAD Dec 18 11:04:23.465000 audit: BPF prog-id=102 op=LOAD Dec 18 11:04:23.465000 audit: BPF prog-id=68 op=UNLOAD Dec 18 11:04:23.465000 audit: BPF prog-id=103 op=LOAD Dec 18 11:04:23.465000 audit: BPF prog-id=104 op=LOAD Dec 18 11:04:23.465000 audit: BPF prog-id=48 op=UNLOAD Dec 18 11:04:23.465000 audit: BPF prog-id=49 op=UNLOAD Dec 18 11:04:23.468000 audit: BPF prog-id=105 op=LOAD Dec 18 11:04:23.468000 audit: BPF prog-id=50 op=UNLOAD Dec 18 11:04:23.468000 audit: BPF 
prog-id=106 op=LOAD Dec 18 11:04:23.468000 audit: BPF prog-id=107 op=LOAD Dec 18 11:04:23.468000 audit: BPF prog-id=51 op=UNLOAD Dec 18 11:04:23.468000 audit: BPF prog-id=52 op=UNLOAD Dec 18 11:04:23.469000 audit: BPF prog-id=108 op=LOAD Dec 18 11:04:23.469000 audit: BPF prog-id=63 op=UNLOAD Dec 18 11:04:23.485833 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 18 11:04:23.498623 systemd[1]: kubelet.service: Deactivated successfully. Dec 18 11:04:23.499808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 18 11:04:23.499881 systemd[1]: kubelet.service: Consumed 1.572s CPU time, 127.1M memory peak. Dec 18 11:04:23.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:23.501630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 18 11:04:23.648344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 18 11:04:23.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:23.661022 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 18 11:04:23.800309 kubelet[2695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 18 11:04:23.800309 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 18 11:04:23.800309 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 18 11:04:23.800664 kubelet[2695]: I1218 11:04:23.800358 2695 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 18 11:04:23.806164 kubelet[2695]: I1218 11:04:23.806123 2695 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 18 11:04:23.806164 kubelet[2695]: I1218 11:04:23.806154 2695 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 18 11:04:23.806411 kubelet[2695]: I1218 11:04:23.806381 2695 server.go:954] "Client rotation is on, will bootstrap in background" Dec 18 11:04:23.807543 kubelet[2695]: I1218 11:04:23.807526 2695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 18 11:04:23.811414 kubelet[2695]: I1218 11:04:23.811381 2695 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 18 11:04:23.816232 kubelet[2695]: I1218 11:04:23.816199 2695 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 18 11:04:23.819081 kubelet[2695]: I1218 11:04:23.819050 2695 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 18 11:04:23.819281 kubelet[2695]: I1218 11:04:23.819255 2695 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 18 11:04:23.819426 kubelet[2695]: I1218 11:04:23.819280 2695 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 18 11:04:23.819500 kubelet[2695]: I1218 11:04:23.819436 2695 topology_manager.go:138] "Creating topology manager with none policy" Dec 18 11:04:23.819500 kubelet[2695]: I1218 11:04:23.819444 2695 container_manager_linux.go:304] "Creating device plugin manager" Dec 18 11:04:23.819500 kubelet[2695]: I1218 11:04:23.819489 2695 state_mem.go:36] "Initialized new in-memory state store" Dec 18 11:04:23.819616 kubelet[2695]: I1218 11:04:23.819605 2695 kubelet.go:446] "Attempting to sync node with API server" Dec 18 11:04:23.819643 kubelet[2695]: I1218 11:04:23.819619 2695 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 18 11:04:23.819643 kubelet[2695]: I1218 11:04:23.819639 2695 kubelet.go:352] "Adding apiserver pod source" Dec 18 11:04:23.819688 kubelet[2695]: I1218 11:04:23.819653 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 18 11:04:23.820407 kubelet[2695]: I1218 11:04:23.820381 2695 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 18 11:04:23.820831 kubelet[2695]: I1218 11:04:23.820814 2695 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 18 11:04:23.821191 kubelet[2695]: I1218 11:04:23.821171 2695 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 18 11:04:23.821259 kubelet[2695]: I1218 11:04:23.821207 2695 server.go:1287] "Started kubelet" Dec 18 11:04:23.822175 kubelet[2695]: I1218 11:04:23.822154 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 18 11:04:23.823407 kubelet[2695]: I1218 11:04:23.823267 2695 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Dec 18 11:04:23.823783 kubelet[2695]: E1218 11:04:23.823620 2695 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 18 11:04:23.823783 kubelet[2695]: I1218 11:04:23.823660 2695 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 18 11:04:23.824074 kubelet[2695]: I1218 11:04:23.824047 2695 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 18 11:04:23.824197 kubelet[2695]: I1218 11:04:23.824181 2695 reconciler.go:26] "Reconciler: start to sync state" Dec 18 11:04:23.824945 kubelet[2695]: I1218 11:04:23.824921 2695 factory.go:221] Registration of the systemd container factory successfully Dec 18 11:04:23.825026 kubelet[2695]: I1218 11:04:23.825004 2695 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 18 11:04:23.825702 kubelet[2695]: I1218 11:04:23.825685 2695 server.go:479] "Adding debug handlers to kubelet server" Dec 18 11:04:23.826750 kubelet[2695]: I1218 11:04:23.826688 2695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 18 11:04:23.827935 kubelet[2695]: I1218 11:04:23.827913 2695 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 18 11:04:23.828098 kubelet[2695]: I1218 11:04:23.828078 2695 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 18 11:04:23.829926 kubelet[2695]: E1218 11:04:23.829902 2695 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 18 11:04:23.848968 kubelet[2695]: I1218 11:04:23.848804 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 18 11:04:23.852757 kubelet[2695]: I1218 11:04:23.851926 2695 factory.go:221] Registration of the containerd container factory successfully Dec 18 11:04:23.855143 kubelet[2695]: I1218 11:04:23.855093 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 18 11:04:23.855143 kubelet[2695]: I1218 11:04:23.855135 2695 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 18 11:04:23.855278 kubelet[2695]: I1218 11:04:23.855160 2695 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
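The HardEvictionThresholds in the container-manager config dump above mix absolute quantities (memory.available < 100Mi) with fractions of capacity (nodefs.available < 10%, imagefs.available < 15%). A simplified sketch of how a "LessThan" signal of either kind is checked (assumed logic for illustration, not the eviction manager's code):

    # Evict when the available amount drops below a fixed quantity
    # or below a percentage of total capacity.
    def threshold_exceeded(available, capacity, quantity=None, percentage=None):
        limit = quantity if quantity is not None else capacity * percentage
        return available < limit

    MiB, GiB = 1024**2, 1024**3
    print(threshold_exceeded(64 * MiB, 8 * GiB, quantity=100 * MiB))  # True: memory.available < 100Mi
    print(threshold_exceeded(6 * GiB, 40 * GiB, percentage=0.10))     # False: nodefs.available >= 10%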
Dec 18 11:04:23.855278 kubelet[2695]: I1218 11:04:23.855176 2695 kubelet.go:2382] "Starting kubelet main sync loop" Dec 18 11:04:23.855278 kubelet[2695]: E1218 11:04:23.855236 2695 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 18 11:04:23.902050 kubelet[2695]: I1218 11:04:23.901115 2695 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 18 11:04:23.902050 kubelet[2695]: I1218 11:04:23.901137 2695 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 18 11:04:23.902050 kubelet[2695]: I1218 11:04:23.901159 2695 state_mem.go:36] "Initialized new in-memory state store" Dec 18 11:04:23.902050 kubelet[2695]: I1218 11:04:23.901357 2695 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 18 11:04:23.902050 kubelet[2695]: I1218 11:04:23.901369 2695 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 18 11:04:23.902050 kubelet[2695]: I1218 11:04:23.901387 2695 policy_none.go:49] "None policy: Start" Dec 18 11:04:23.902050 kubelet[2695]: I1218 11:04:23.901396 2695 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 18 11:04:23.902050 kubelet[2695]: I1218 11:04:23.901404 2695 state_mem.go:35] "Initializing new in-memory state store" Dec 18 11:04:23.902287 kubelet[2695]: I1218 11:04:23.902150 2695 state_mem.go:75] "Updated machine memory state" Dec 18 11:04:23.906564 kubelet[2695]: I1218 11:04:23.906530 2695 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 18 11:04:23.906724 kubelet[2695]: I1218 11:04:23.906692 2695 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 18 11:04:23.906773 kubelet[2695]: I1218 11:04:23.906709 2695 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 18 11:04:23.906955 kubelet[2695]: I1218 11:04:23.906919 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 18 11:04:23.907884 kubelet[2695]: E1218 11:04:23.907865 2695 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 18 11:04:23.956643 kubelet[2695]: I1218 11:04:23.956595 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:23.956643 kubelet[2695]: I1218 11:04:23.956634 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:23.957558 kubelet[2695]: I1218 11:04:23.956989 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 18 11:04:23.962444 kubelet[2695]: E1218 11:04:23.962418 2695 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:24.011179 kubelet[2695]: I1218 11:04:24.011152 2695 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 18 11:04:24.017695 kubelet[2695]: I1218 11:04:24.017504 2695 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 18 11:04:24.017695 kubelet[2695]: I1218 11:04:24.017588 2695 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 18 11:04:24.024951 kubelet[2695]: I1218 11:04:24.024905 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:24.024951 kubelet[2695]: I1218 11:04:24.024947 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:24.025090 kubelet[2695]: I1218 11:04:24.024969 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:24.025090 kubelet[2695]: I1218 11:04:24.024986 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:24.025090 kubelet[2695]: I1218 11:04:24.025002 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a9d528305958a399919a7f1a8d9296b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5a9d528305958a399919a7f1a8d9296b\") " pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:24.025090 kubelet[2695]: I1218 11:04:24.025017 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a9d528305958a399919a7f1a8d9296b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5a9d528305958a399919a7f1a8d9296b\") " 
pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:24.025090 kubelet[2695]: I1218 11:04:24.025034 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Dec 18 11:04:24.025186 kubelet[2695]: I1218 11:04:24.025048 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Dec 18 11:04:24.025186 kubelet[2695]: I1218 11:04:24.025063 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a9d528305958a399919a7f1a8d9296b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5a9d528305958a399919a7f1a8d9296b\") " pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:24.261814 kubelet[2695]: E1218 11:04:24.261491 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:24.262636 kubelet[2695]: E1218 11:04:24.262524 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:24.262944 kubelet[2695]: E1218 11:04:24.262919 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:24.820926 kubelet[2695]: I1218 11:04:24.820874 2695 apiserver.go:52] "Watching apiserver" Dec 18 11:04:24.824583 kubelet[2695]: I1218 11:04:24.824540 2695 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 18 11:04:24.880844 kubelet[2695]: E1218 11:04:24.880803 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:24.881599 kubelet[2695]: E1218 11:04:24.881563 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:24.881599 kubelet[2695]: I1218 11:04:24.881596 2695 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:24.887271 kubelet[2695]: E1218 11:04:24.887228 2695 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 18 11:04:24.888038 kubelet[2695]: E1218 11:04:24.887384 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:24.906050 kubelet[2695]: I1218 11:04:24.905989 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.905973023 podStartE2EDuration="1.905973023s" 
podCreationTimestamp="2025-12-18 11:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-18 11:04:24.898919343 +0000 UTC m=+1.131441161" watchObservedRunningTime="2025-12-18 11:04:24.905973023 +0000 UTC m=+1.138494841" Dec 18 11:04:24.906174 kubelet[2695]: I1218 11:04:24.906104 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.906099623 podStartE2EDuration="1.906099623s" podCreationTimestamp="2025-12-18 11:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-18 11:04:24.905156463 +0000 UTC m=+1.137678321" watchObservedRunningTime="2025-12-18 11:04:24.906099623 +0000 UTC m=+1.138621481" Dec 18 11:04:24.920935 kubelet[2695]: I1218 11:04:24.920704 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.920689983 podStartE2EDuration="1.920689983s" podCreationTimestamp="2025-12-18 11:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-18 11:04:24.912987103 +0000 UTC m=+1.145508961" watchObservedRunningTime="2025-12-18 11:04:24.920689983 +0000 UTC m=+1.153211841" Dec 18 11:04:25.881262 kubelet[2695]: E1218 11:04:25.880953 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:25.881262 kubelet[2695]: E1218 11:04:25.880953 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:27.461247 kubelet[2695]: E1218 11:04:27.461207 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:29.252890 kubelet[2695]: I1218 11:04:29.252863 2695 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 18 11:04:29.253529 containerd[1520]: time="2025-12-18T11:04:29.253155445Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 18 11:04:29.254391 kubelet[2695]: I1218 11:04:29.253863 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 18 11:04:30.300682 systemd[1]: Created slice kubepods-besteffort-pod216abc4e_c3b5_4344_ab5b_67b0ea586e8d.slice - libcontainer container kubepods-besteffort-pod216abc4e_c3b5_4344_ab5b_67b0ea586e8d.slice. 
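The slice created just above encodes the kube-proxy pod's QoS class and UID, with the UID's dashes replaced by underscores because "-" is the hierarchy separator in systemd unit names. A small sketch of that mapping, assuming the systemd cgroup driver reported earlier in this log:

    # Map a BestEffort pod UID to the kubepods systemd slice name.
    def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice_name("216abc4e-c3b5-4344-ab5b-67b0ea586e8d"))
    # kubepods-besteffort-pod216abc4e_c3b5_4344_ab5b_67b0ea586e8d.slice

The printed name matches the slice started by systemd above, and the same UID appears in the volume reconciliation entries for kube-proxy-nhz4z that follow.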
Dec 18 11:04:30.363446 kubelet[2695]: I1218 11:04:30.362559 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/216abc4e-c3b5-4344-ab5b-67b0ea586e8d-lib-modules\") pod \"kube-proxy-nhz4z\" (UID: \"216abc4e-c3b5-4344-ab5b-67b0ea586e8d\") " pod="kube-system/kube-proxy-nhz4z" Dec 18 11:04:30.364245 kubelet[2695]: I1218 11:04:30.363932 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2p4c\" (UniqueName: \"kubernetes.io/projected/216abc4e-c3b5-4344-ab5b-67b0ea586e8d-kube-api-access-v2p4c\") pod \"kube-proxy-nhz4z\" (UID: \"216abc4e-c3b5-4344-ab5b-67b0ea586e8d\") " pod="kube-system/kube-proxy-nhz4z" Dec 18 11:04:30.364245 kubelet[2695]: I1218 11:04:30.364137 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/216abc4e-c3b5-4344-ab5b-67b0ea586e8d-kube-proxy\") pod \"kube-proxy-nhz4z\" (UID: \"216abc4e-c3b5-4344-ab5b-67b0ea586e8d\") " pod="kube-system/kube-proxy-nhz4z" Dec 18 11:04:30.364245 kubelet[2695]: I1218 11:04:30.364164 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/216abc4e-c3b5-4344-ab5b-67b0ea586e8d-xtables-lock\") pod \"kube-proxy-nhz4z\" (UID: \"216abc4e-c3b5-4344-ab5b-67b0ea586e8d\") " pod="kube-system/kube-proxy-nhz4z" Dec 18 11:04:30.372531 systemd[1]: Created slice kubepods-besteffort-podd858b2af_b8a8_4753_b8b6_079f8612da5e.slice - libcontainer container kubepods-besteffort-podd858b2af_b8a8_4753_b8b6_079f8612da5e.slice. Dec 18 11:04:30.464633 kubelet[2695]: I1218 11:04:30.464570 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d858b2af-b8a8-4753-b8b6-079f8612da5e-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sz267\" (UID: \"d858b2af-b8a8-4753-b8b6-079f8612da5e\") " pod="tigera-operator/tigera-operator-7dcd859c48-sz267" Dec 18 11:04:30.464633 kubelet[2695]: I1218 11:04:30.464641 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vctv\" (UniqueName: \"kubernetes.io/projected/d858b2af-b8a8-4753-b8b6-079f8612da5e-kube-api-access-8vctv\") pod \"tigera-operator-7dcd859c48-sz267\" (UID: \"d858b2af-b8a8-4753-b8b6-079f8612da5e\") " pod="tigera-operator/tigera-operator-7dcd859c48-sz267" Dec 18 11:04:30.479462 kubelet[2695]: E1218 11:04:30.479332 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:30.615133 kubelet[2695]: E1218 11:04:30.615015 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:30.616496 containerd[1520]: time="2025-12-18T11:04:30.615846365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nhz4z,Uid:216abc4e-c3b5-4344-ab5b-67b0ea586e8d,Namespace:kube-system,Attempt:0,}" Dec 18 11:04:30.642313 containerd[1520]: time="2025-12-18T11:04:30.642258809Z" level=info msg="connecting to shim 050e159968179f23423b67971841c6225d0b2753b7a8f4fd9cf7a7938abb65dc" 
address="unix:///run/containerd/s/9ea7d110f39a699b8f026ceae7caa7adbe7c25689ce4e04da0a9e992dade563c" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:04:30.675744 containerd[1520]: time="2025-12-18T11:04:30.675684834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sz267,Uid:d858b2af-b8a8-4753-b8b6-079f8612da5e,Namespace:tigera-operator,Attempt:0,}" Dec 18 11:04:30.679932 systemd[1]: Started cri-containerd-050e159968179f23423b67971841c6225d0b2753b7a8f4fd9cf7a7938abb65dc.scope - libcontainer container 050e159968179f23423b67971841c6225d0b2753b7a8f4fd9cf7a7938abb65dc. Dec 18 11:04:30.693000 audit: BPF prog-id=109 op=LOAD Dec 18 11:04:30.695131 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 18 11:04:30.695191 kernel: audit: type=1334 audit(1766055870.693:382): prog-id=109 op=LOAD Dec 18 11:04:30.694000 audit: BPF prog-id=110 op=LOAD Dec 18 11:04:30.694000 audit[2767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2753 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.699983 kernel: audit: type=1334 audit(1766055870.694:383): prog-id=110 op=LOAD Dec 18 11:04:30.700051 kernel: audit: type=1300 audit(1766055870.694:383): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2753 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.703403 kernel: audit: type=1327 audit(1766055870.694:383): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.703934 containerd[1520]: time="2025-12-18T11:04:30.703861882Z" level=info msg="connecting to shim c30e7a7869f61c3c387500a666a4d93daaa7cae82994177635c8e49a4cabf741" address="unix:///run/containerd/s/6e637bfe04fe66147da8dd4c9ed7452ebdbe65f6c1b93bb5c52091207526410d" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:04:30.706000 audit: BPF prog-id=110 op=UNLOAD Dec 18 11:04:30.706000 audit[2767]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.710941 kernel: audit: type=1334 audit(1766055870.706:384): prog-id=110 op=UNLOAD Dec 18 11:04:30.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.711075 kernel: audit: type=1300 audit(1766055870.706:384): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2753 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.711135 kernel: audit: type=1327 audit(1766055870.706:384): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.706000 audit: BPF prog-id=111 op=LOAD Dec 18 11:04:30.715027 kernel: audit: type=1334 audit(1766055870.706:385): prog-id=111 op=LOAD Dec 18 11:04:30.706000 audit[2767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2753 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.715229 kernel: audit: type=1300 audit(1766055870.706:385): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2753 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.718455 kernel: audit: type=1327 audit(1766055870.706:385): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.707000 audit: BPF prog-id=112 op=LOAD Dec 18 11:04:30.707000 audit[2767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2753 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.713000 audit: BPF prog-id=112 op=UNLOAD Dec 18 11:04:30.713000 audit[2767]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.713000 audit: BPF prog-id=111 op=UNLOAD Dec 18 11:04:30.713000 audit[2767]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2753 
pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.713000 audit: BPF prog-id=113 op=LOAD Dec 18 11:04:30.713000 audit[2767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2753 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306531353939363831373966323334323362363739373138343163 Dec 18 11:04:30.736930 systemd[1]: Started cri-containerd-c30e7a7869f61c3c387500a666a4d93daaa7cae82994177635c8e49a4cabf741.scope - libcontainer container c30e7a7869f61c3c387500a666a4d93daaa7cae82994177635c8e49a4cabf741. Dec 18 11:04:30.741284 containerd[1520]: time="2025-12-18T11:04:30.740779918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nhz4z,Uid:216abc4e-c3b5-4344-ab5b-67b0ea586e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"050e159968179f23423b67971841c6225d0b2753b7a8f4fd9cf7a7938abb65dc\"" Dec 18 11:04:30.742123 kubelet[2695]: E1218 11:04:30.742093 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:30.746666 containerd[1520]: time="2025-12-18T11:04:30.746623217Z" level=info msg="CreateContainer within sandbox \"050e159968179f23423b67971841c6225d0b2753b7a8f4fd9cf7a7938abb65dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 18 11:04:30.753000 audit: BPF prog-id=114 op=LOAD Dec 18 11:04:30.754000 audit: BPF prog-id=115 op=LOAD Dec 18 11:04:30.754000 audit[2804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2794 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306537613738363966363163336333383735303061363636613464 Dec 18 11:04:30.754000 audit: BPF prog-id=115 op=UNLOAD Dec 18 11:04:30.754000 audit[2804]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2794 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.754000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306537613738363966363163336333383735303061363636613464 Dec 18 11:04:30.754000 audit: BPF prog-id=116 op=LOAD Dec 18 11:04:30.754000 audit[2804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2794 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306537613738363966363163336333383735303061363636613464 Dec 18 11:04:30.754000 audit: BPF prog-id=117 op=LOAD Dec 18 11:04:30.754000 audit[2804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2794 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306537613738363966363163336333383735303061363636613464 Dec 18 11:04:30.754000 audit: BPF prog-id=117 op=UNLOAD Dec 18 11:04:30.754000 audit[2804]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2794 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306537613738363966363163336333383735303061363636613464 Dec 18 11:04:30.754000 audit: BPF prog-id=116 op=UNLOAD Dec 18 11:04:30.754000 audit[2804]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2794 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306537613738363966363163336333383735303061363636613464 Dec 18 11:04:30.754000 audit: BPF prog-id=118 op=LOAD Dec 18 11:04:30.754000 audit[2804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2794 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.754000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306537613738363966363163336333383735303061363636613464 Dec 18 11:04:30.756770 containerd[1520]: time="2025-12-18T11:04:30.756703848Z" level=info msg="Container c953277e87a219e0b34426e2e8096e405420a572b726a0c9b651628b0ff5a165: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:04:30.772151 containerd[1520]: time="2025-12-18T11:04:30.772068217Z" level=info msg="CreateContainer within sandbox \"050e159968179f23423b67971841c6225d0b2753b7a8f4fd9cf7a7938abb65dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c953277e87a219e0b34426e2e8096e405420a572b726a0c9b651628b0ff5a165\"" Dec 18 11:04:30.773511 containerd[1520]: time="2025-12-18T11:04:30.772930659Z" level=info msg="StartContainer for \"c953277e87a219e0b34426e2e8096e405420a572b726a0c9b651628b0ff5a165\"" Dec 18 11:04:30.775510 containerd[1520]: time="2025-12-18T11:04:30.775469427Z" level=info msg="connecting to shim c953277e87a219e0b34426e2e8096e405420a572b726a0c9b651628b0ff5a165" address="unix:///run/containerd/s/9ea7d110f39a699b8f026ceae7caa7adbe7c25689ce4e04da0a9e992dade563c" protocol=ttrpc version=3 Dec 18 11:04:30.783328 containerd[1520]: time="2025-12-18T11:04:30.783279772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sz267,Uid:d858b2af-b8a8-4753-b8b6-079f8612da5e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c30e7a7869f61c3c387500a666a4d93daaa7cae82994177635c8e49a4cabf741\"" Dec 18 11:04:30.785786 containerd[1520]: time="2025-12-18T11:04:30.785668819Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 18 11:04:30.815033 systemd[1]: Started cri-containerd-c953277e87a219e0b34426e2e8096e405420a572b726a0c9b651628b0ff5a165.scope - libcontainer container c953277e87a219e0b34426e2e8096e405420a572b726a0c9b651628b0ff5a165. 
Dec 18 11:04:30.875000 audit: BPF prog-id=119 op=LOAD Dec 18 11:04:30.875000 audit[2836]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2753 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353332373765383761323139653062333434323665326538303936 Dec 18 11:04:30.875000 audit: BPF prog-id=120 op=LOAD Dec 18 11:04:30.875000 audit[2836]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2753 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353332373765383761323139653062333434323665326538303936 Dec 18 11:04:30.875000 audit: BPF prog-id=120 op=UNLOAD Dec 18 11:04:30.875000 audit[2836]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353332373765383761323139653062333434323665326538303936 Dec 18 11:04:30.875000 audit: BPF prog-id=119 op=UNLOAD Dec 18 11:04:30.875000 audit[2836]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353332373765383761323139653062333434323665326538303936 Dec 18 11:04:30.875000 audit: BPF prog-id=121 op=LOAD Dec 18 11:04:30.875000 audit[2836]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2753 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:30.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353332373765383761323139653062333434323665326538303936 Dec 18 11:04:30.891906 kubelet[2695]: E1218 11:04:30.891876 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:30.915508 containerd[1520]: time="2025-12-18T11:04:30.915472307Z" level=info msg="StartContainer for \"c953277e87a219e0b34426e2e8096e405420a572b726a0c9b651628b0ff5a165\" returns successfully" Dec 18 11:04:31.052000 audit[2902]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.052000 audit[2902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe395ee0 a2=0 a3=1 items=0 ppid=2850 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 18 11:04:31.053000 audit[2903]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.053000 audit[2903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffddb0e3c0 a2=0 a3=1 items=0 ppid=2850 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.053000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 18 11:04:31.053000 audit[2904]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=2904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.053000 audit[2904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8baf140 a2=0 a3=1 items=0 ppid=2850 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 18 11:04:31.055000 audit[2905]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.055000 audit[2905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffef9b31c0 a2=0 a3=1 items=0 ppid=2850 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 18 11:04:31.055000 audit[2906]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.055000 audit[2906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6430000 a2=0 a3=1 items=0 ppid=2850 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.055000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 18 11:04:31.057000 audit[2907]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.057000 audit[2907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff123200 a2=0 a3=1 items=0 ppid=2850 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.057000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 18 11:04:31.156000 audit[2909]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.156000 audit[2909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd371bd50 a2=0 a3=1 items=0 ppid=2850 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 18 11:04:31.159000 audit[2911]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.159000 audit[2911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc4b78660 a2=0 a3=1 items=0 ppid=2850 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 18 11:04:31.163000 audit[2914]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.163000 audit[2914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffddcbdd20 a2=0 a3=1 items=0 ppid=2850 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 18 11:04:31.164000 audit[2915]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.164000 audit[2915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff41110c0 a2=0 a3=1 items=0 ppid=2850 pid=2915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 18 11:04:31.167000 audit[2917]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.167000 audit[2917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe87c32d0 a2=0 a3=1 items=0 ppid=2850 pid=2917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 18 11:04:31.168000 audit[2918]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.168000 audit[2918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8641bd0 a2=0 a3=1 items=0 ppid=2850 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.168000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 18 11:04:31.170000 audit[2920]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.170000 audit[2920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff56a0ce0 a2=0 a3=1 items=0 ppid=2850 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 18 11:04:31.174000 audit[2923]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.174000 audit[2923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd26168d0 a2=0 a3=1 items=0 ppid=2850 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 18 11:04:31.175000 audit[2924]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.175000 audit[2924]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffda42c4a0 a2=0 a3=1 items=0 ppid=2850 pid=2924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 18 11:04:31.177000 audit[2926]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.177000 audit[2926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd96e83f0 a2=0 a3=1 items=0 ppid=2850 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 18 11:04:31.179000 audit[2927]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.179000 audit[2927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe6147c80 a2=0 a3=1 items=0 ppid=2850 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.179000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 18 11:04:31.181000 audit[2929]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.181000 audit[2929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff6e7fcd0 a2=0 a3=1 items=0 ppid=2850 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 18 11:04:31.185000 audit[2932]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.185000 audit[2932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff9d1b870 a2=0 a3=1 items=0 ppid=2850 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 18 11:04:31.188000 audit[2935]: 
NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.188000 audit[2935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffed093f80 a2=0 a3=1 items=0 ppid=2850 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 18 11:04:31.189000 audit[2936]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.189000 audit[2936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3234b70 a2=0 a3=1 items=0 ppid=2850 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 18 11:04:31.192000 audit[2938]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.192000 audit[2938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd2b7ef30 a2=0 a3=1 items=0 ppid=2850 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 18 11:04:31.195000 audit[2941]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.195000 audit[2941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff245ef30 a2=0 a3=1 items=0 ppid=2850 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 18 11:04:31.196000 audit[2942]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.196000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9ce7180 a2=0 a3=1 items=0 ppid=2850 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.196000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 18 11:04:31.199000 audit[2944]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 18 11:04:31.199000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffff823a50 a2=0 a3=1 items=0 ppid=2850 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 18 11:04:31.220000 audit[2950]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:31.220000 audit[2950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff3e586e0 a2=0 a3=1 items=0 ppid=2850 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:31.233000 audit[2950]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:31.233000 audit[2950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff3e586e0 a2=0 a3=1 items=0 ppid=2850 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:31.234000 audit[2955]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=2955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.234000 audit[2955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdaa62d80 a2=0 a3=1 items=0 ppid=2850 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.234000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 18 11:04:31.237000 audit[2957]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=2957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.237000 audit[2957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd732f630 a2=0 a3=1 items=0 ppid=2850 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.237000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 18 11:04:31.242000 audit[2960]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=2960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.242000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc1417ce0 a2=0 a3=1 items=0 ppid=2850 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 18 11:04:31.243000 audit[2961]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.243000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd908c5a0 a2=0 a3=1 items=0 ppid=2850 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.243000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 18 11:04:31.246000 audit[2963]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.246000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeb9ae960 a2=0 a3=1 items=0 ppid=2850 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.246000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 18 11:04:31.247000 audit[2964]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=2964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.247000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd412f460 a2=0 a3=1 items=0 ppid=2850 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.247000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 18 11:04:31.250000 audit[2966]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=2966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.250000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe5f39900 a2=0 a3=1 items=0 ppid=2850 pid=2966 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.250000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 18 11:04:31.253000 audit[2969]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.253000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffcca14b40 a2=0 a3=1 items=0 ppid=2850 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.253000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 18 11:04:31.254000 audit[2970]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.254000 audit[2970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5f84730 a2=0 a3=1 items=0 ppid=2850 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.254000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 18 11:04:31.257000 audit[2972]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.257000 audit[2972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff0df3160 a2=0 a3=1 items=0 ppid=2850 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.257000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 18 11:04:31.258000 audit[2973]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.258000 audit[2973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee711f70 a2=0 a3=1 items=0 ppid=2850 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.258000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 18 11:04:31.262000 audit[2975]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule 
pid=2975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.262000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe42807a0 a2=0 a3=1 items=0 ppid=2850 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.262000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 18 11:04:31.266000 audit[2978]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.266000 audit[2978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffedbe580 a2=0 a3=1 items=0 ppid=2850 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 18 11:04:31.270000 audit[2981]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=2981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.270000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcf10b4b0 a2=0 a3=1 items=0 ppid=2850 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 18 11:04:31.272000 audit[2982]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.272000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3f785d0 a2=0 a3=1 items=0 ppid=2850 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 18 11:04:31.274000 audit[2984]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.274000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff4acd9a0 a2=0 a3=1 items=0 ppid=2850 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.274000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 18 11:04:31.278000 audit[2987]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=2987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.278000 audit[2987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffea9c2870 a2=0 a3=1 items=0 ppid=2850 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.278000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 18 11:04:31.279000 audit[2988]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.279000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3279280 a2=0 a3=1 items=0 ppid=2850 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 18 11:04:31.281000 audit[2990]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.281000 audit[2990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffed414740 a2=0 a3=1 items=0 ppid=2850 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 18 11:04:31.282000 audit[2991]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.282000 audit[2991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc33a3840 a2=0 a3=1 items=0 ppid=2850 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 18 11:04:31.284000 audit[2993]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.284000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd58dff40 a2=0 a3=1 items=0 ppid=2850 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 18 11:04:31.288000 audit[2996]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=2996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 18 11:04:31.288000 audit[2996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffce8f1250 a2=0 a3=1 items=0 ppid=2850 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.288000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 18 11:04:31.291000 audit[2998]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 18 11:04:31.291000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffccbe9630 a2=0 a3=1 items=0 ppid=2850 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.291000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:31.291000 audit[2998]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 18 11:04:31.291000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffccbe9630 a2=0 a3=1 items=0 ppid=2850 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:31.291000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:31.897383 kubelet[2695]: E1218 11:04:31.897294 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:31.900957 kubelet[2695]: E1218 11:04:31.900874 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:32.278508 kubelet[2695]: E1218 11:04:32.278305 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:32.296942 kubelet[2695]: I1218 11:04:32.296740 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nhz4z" podStartSLOduration=2.296711983 podStartE2EDuration="2.296711983s" podCreationTimestamp="2025-12-18 11:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-18 11:04:31.909578213 +0000 UTC m=+8.142100071" watchObservedRunningTime="2025-12-18 11:04:32.296711983 
+0000 UTC m=+8.529233841" Dec 18 11:04:32.314305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072209328.mount: Deactivated successfully. Dec 18 11:04:32.899098 kubelet[2695]: E1218 11:04:32.898916 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:32.899580 kubelet[2695]: E1218 11:04:32.899186 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:33.330672 containerd[1520]: time="2025-12-18T11:04:33.330628662Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:33.332025 containerd[1520]: time="2025-12-18T11:04:33.331982826Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 18 11:04:33.333071 containerd[1520]: time="2025-12-18T11:04:33.333028868Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:33.335347 containerd[1520]: time="2025-12-18T11:04:33.335303314Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:33.336375 containerd[1520]: time="2025-12-18T11:04:33.336236277Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.550530418s" Dec 18 11:04:33.336375 containerd[1520]: time="2025-12-18T11:04:33.336267557Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 18 11:04:33.340599 containerd[1520]: time="2025-12-18T11:04:33.340559608Z" level=info msg="CreateContainer within sandbox \"c30e7a7869f61c3c387500a666a4d93daaa7cae82994177635c8e49a4cabf741\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 18 11:04:33.349222 containerd[1520]: time="2025-12-18T11:04:33.349176230Z" level=info msg="Container 886949beca17e452a99a39f2176ec24e9e6b8504d99b0690c92cd06b14dee775: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:04:33.355645 containerd[1520]: time="2025-12-18T11:04:33.355598087Z" level=info msg="CreateContainer within sandbox \"c30e7a7869f61c3c387500a666a4d93daaa7cae82994177635c8e49a4cabf741\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"886949beca17e452a99a39f2176ec24e9e6b8504d99b0690c92cd06b14dee775\"" Dec 18 11:04:33.356332 containerd[1520]: time="2025-12-18T11:04:33.356141248Z" level=info msg="StartContainer for \"886949beca17e452a99a39f2176ec24e9e6b8504d99b0690c92cd06b14dee775\"" Dec 18 11:04:33.357224 containerd[1520]: time="2025-12-18T11:04:33.357180211Z" level=info msg="connecting to shim 886949beca17e452a99a39f2176ec24e9e6b8504d99b0690c92cd06b14dee775" address="unix:///run/containerd/s/6e637bfe04fe66147da8dd4c9ed7452ebdbe65f6c1b93bb5c52091207526410d" protocol=ttrpc version=3 Dec 18 
11:04:33.383920 systemd[1]: Started cri-containerd-886949beca17e452a99a39f2176ec24e9e6b8504d99b0690c92cd06b14dee775.scope - libcontainer container 886949beca17e452a99a39f2176ec24e9e6b8504d99b0690c92cd06b14dee775. Dec 18 11:04:33.393000 audit: BPF prog-id=122 op=LOAD Dec 18 11:04:33.393000 audit: BPF prog-id=123 op=LOAD Dec 18 11:04:33.393000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2794 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:33.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363934396265636131376534353261393961333966323137366563 Dec 18 11:04:33.394000 audit: BPF prog-id=123 op=UNLOAD Dec 18 11:04:33.394000 audit[3008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2794 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:33.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363934396265636131376534353261393961333966323137366563 Dec 18 11:04:33.394000 audit: BPF prog-id=124 op=LOAD Dec 18 11:04:33.394000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2794 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:33.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363934396265636131376534353261393961333966323137366563 Dec 18 11:04:33.394000 audit: BPF prog-id=125 op=LOAD Dec 18 11:04:33.394000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2794 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:33.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363934396265636131376534353261393961333966323137366563 Dec 18 11:04:33.394000 audit: BPF prog-id=125 op=UNLOAD Dec 18 11:04:33.394000 audit[3008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2794 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:33.394000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363934396265636131376534353261393961333966323137366563 Dec 18 11:04:33.394000 audit: BPF prog-id=124 op=UNLOAD Dec 18 11:04:33.394000 audit[3008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2794 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:33.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363934396265636131376534353261393961333966323137366563 Dec 18 11:04:33.394000 audit: BPF prog-id=126 op=LOAD Dec 18 11:04:33.394000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2794 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:33.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363934396265636131376534353261393961333966323137366563 Dec 18 11:04:33.417382 containerd[1520]: time="2025-12-18T11:04:33.417259047Z" level=info msg="StartContainer for \"886949beca17e452a99a39f2176ec24e9e6b8504d99b0690c92cd06b14dee775\" returns successfully" Dec 18 11:04:33.917380 kubelet[2695]: I1218 11:04:33.917321 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sz267" podStartSLOduration=1.362697073 podStartE2EDuration="3.917305302s" podCreationTimestamp="2025-12-18 11:04:30 +0000 UTC" firstStartedPulling="2025-12-18 11:04:30.784712816 +0000 UTC m=+7.017234634" lastFinishedPulling="2025-12-18 11:04:33.339321005 +0000 UTC m=+9.571842863" observedRunningTime="2025-12-18 11:04:33.917251901 +0000 UTC m=+10.149773759" watchObservedRunningTime="2025-12-18 11:04:33.917305302 +0000 UTC m=+10.149827120" Dec 18 11:04:37.473492 kubelet[2695]: E1218 11:04:37.473448 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:38.973017 sudo[1746]: pam_unix(sudo:session): session closed for user root Dec 18 11:04:38.972000 audit[1746]: AUDIT1106 pid=1746 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 18 11:04:38.974345 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 18 11:04:38.974388 kernel: audit: type=1106 audit(1766055878.972:462): pid=1746 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 18 11:04:38.973000 audit[1746]: AUDIT1104 pid=1746 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 18 11:04:38.981375 kernel: audit: type=1104 audit(1766055878.973:463): pid=1746 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 18 11:04:38.983688 sshd[1745]: Connection closed by 10.0.0.1 port 60076 Dec 18 11:04:38.985542 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Dec 18 11:04:38.985000 audit[1741]: AUDIT1106 pid=1741 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:38.985000 audit[1741]: AUDIT1104 pid=1741 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:38.991965 systemd[1]: sshd@6-4099-10.0.0.27:22-10.0.0.1:60076.service: Deactivated successfully. Dec 18 11:04:38.994066 systemd[1]: session-8.scope: Deactivated successfully. Dec 18 11:04:38.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-4099-10.0.0.27:22-10.0.0.1:60076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:38.994790 kernel: audit: type=1106 audit(1766055878.985:464): pid=1741 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:38.994334 systemd[1]: session-8.scope: Consumed 6.384s CPU time, 186.9M memory peak. Dec 18 11:04:38.994856 kernel: audit: type=1104 audit(1766055878.985:465): pid=1741 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:04:38.994869 kernel: audit: type=1131 audit(1766055878.991:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-4099-10.0.0.27:22-10.0.0.1:60076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:04:38.997628 systemd-logind[1499]: Session 8 logged out. Waiting for processes to exit. Dec 18 11:04:38.999672 systemd-logind[1499]: Removed session 8. 
Dec 18 11:04:41.747000 audit[3099]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:41.747000 audit[3099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff528ff30 a2=0 a3=1 items=0 ppid=2850 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:41.753853 kernel: audit: type=1325 audit(1766055881.747:467): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:41.753911 kernel: audit: type=1300 audit(1766055881.747:467): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff528ff30 a2=0 a3=1 items=0 ppid=2850 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:41.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:41.762501 kernel: audit: type=1327 audit(1766055881.747:467): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:41.758000 audit[3099]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:41.766780 kernel: audit: type=1325 audit(1766055881.758:468): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:41.758000 audit[3099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff528ff30 a2=0 a3=1 items=0 ppid=2850 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:41.772683 kernel: audit: type=1300 audit(1766055881.758:468): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff528ff30 a2=0 a3=1 items=0 ppid=2850 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:41.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:41.789000 audit[3101]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:41.789000 audit[3101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffda1be920 a2=0 a3=1 items=0 ppid=2850 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:41.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:41.795000 audit[3101]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 18 11:04:41.795000 audit[3101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffda1be920 a2=0 a3=1 items=0 ppid=2850 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:41.795000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:42.055890 update_engine[1501]: I20251218 11:04:42.055757 1501 update_attempter.cc:509] Updating boot flags... Dec 18 11:04:45.412000 audit[3125]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:45.416226 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 18 11:04:45.416317 kernel: audit: type=1325 audit(1766055885.412:471): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:45.412000 audit[3125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd30750c0 a2=0 a3=1 items=0 ppid=2850 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:45.422430 kernel: audit: type=1300 audit(1766055885.412:471): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd30750c0 a2=0 a3=1 items=0 ppid=2850 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:45.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:45.425689 kernel: audit: type=1327 audit(1766055885.412:471): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:45.417000 audit[3125]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:45.428728 kernel: audit: type=1325 audit(1766055885.417:472): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:45.417000 audit[3125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd30750c0 a2=0 a3=1 items=0 ppid=2850 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:45.434464 kernel: audit: type=1300 audit(1766055885.417:472): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd30750c0 a2=0 a3=1 items=0 ppid=2850 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:45.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:45.439548 kernel: audit: type=1327 audit(1766055885.417:472): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:45.438000 audit[3127]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:45.442750 kernel: audit: type=1325 audit(1766055885.438:473): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:45.438000 audit[3127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc8292160 a2=0 a3=1 items=0 ppid=2850 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:45.448106 kernel: audit: type=1300 audit(1766055885.438:473): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc8292160 a2=0 a3=1 items=0 ppid=2850 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:45.438000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:45.450090 kernel: audit: type=1327 audit(1766055885.438:473): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:45.448000 audit[3127]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:45.451972 kernel: audit: type=1325 audit(1766055885.448:474): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:45.448000 audit[3127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc8292160 a2=0 a3=1 items=0 ppid=2850 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:45.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:46.453000 audit[3129]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:46.453000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc2d5e7b0 a2=0 a3=1 items=0 ppid=2850 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:46.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:46.458000 audit[3129]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:46.458000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc2d5e7b0 a2=0 a3=1 items=0 ppid=2850 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:46.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:47.467000 audit[3131]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:47.467000 audit[3131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc718d270 a2=0 a3=1 items=0 ppid=2850 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:47.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:47.473000 audit[3131]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:47.473000 audit[3131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc718d270 a2=0 a3=1 items=0 ppid=2850 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:47.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:47.586260 systemd[1]: Created slice kubepods-besteffort-podd8384adc_9bcd_498f_925a_a4a73fe11213.slice - libcontainer container kubepods-besteffort-podd8384adc_9bcd_498f_925a_a4a73fe11213.slice. Dec 18 11:04:47.685887 kubelet[2695]: I1218 11:04:47.685830 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8384adc-9bcd-498f-925a-a4a73fe11213-tigera-ca-bundle\") pod \"calico-typha-85f49666d5-nrwc5\" (UID: \"d8384adc-9bcd-498f-925a-a4a73fe11213\") " pod="calico-system/calico-typha-85f49666d5-nrwc5" Dec 18 11:04:47.685887 kubelet[2695]: I1218 11:04:47.685881 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d8384adc-9bcd-498f-925a-a4a73fe11213-typha-certs\") pod \"calico-typha-85f49666d5-nrwc5\" (UID: \"d8384adc-9bcd-498f-925a-a4a73fe11213\") " pod="calico-system/calico-typha-85f49666d5-nrwc5" Dec 18 11:04:47.686312 kubelet[2695]: I1218 11:04:47.685903 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk47t\" (UniqueName: \"kubernetes.io/projected/d8384adc-9bcd-498f-925a-a4a73fe11213-kube-api-access-pk47t\") pod \"calico-typha-85f49666d5-nrwc5\" (UID: \"d8384adc-9bcd-498f-925a-a4a73fe11213\") " pod="calico-system/calico-typha-85f49666d5-nrwc5" Dec 18 11:04:47.768136 systemd[1]: Created slice kubepods-besteffort-pod05cfa3ae_36f8_4ba2_8612_234a3678df28.slice - libcontainer container kubepods-besteffort-pod05cfa3ae_36f8_4ba2_8612_234a3678df28.slice. 
Dec 18 11:04:47.786827 kubelet[2695]: I1218 11:04:47.786773 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05cfa3ae-36f8-4ba2-8612-234a3678df28-cni-bin-dir\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787185 kubelet[2695]: I1218 11:04:47.787129 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05cfa3ae-36f8-4ba2-8612-234a3678df28-node-certs\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787185 kubelet[2695]: I1218 11:04:47.787176 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05cfa3ae-36f8-4ba2-8612-234a3678df28-tigera-ca-bundle\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787259 kubelet[2695]: I1218 11:04:47.787200 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05cfa3ae-36f8-4ba2-8612-234a3678df28-xtables-lock\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787259 kubelet[2695]: I1218 11:04:47.787222 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05cfa3ae-36f8-4ba2-8612-234a3678df28-flexvol-driver-host\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787259 kubelet[2695]: I1218 11:04:47.787241 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05cfa3ae-36f8-4ba2-8612-234a3678df28-policysync\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787325 kubelet[2695]: I1218 11:04:47.787259 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tbrk\" (UniqueName: \"kubernetes.io/projected/05cfa3ae-36f8-4ba2-8612-234a3678df28-kube-api-access-2tbrk\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787325 kubelet[2695]: I1218 11:04:47.787302 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05cfa3ae-36f8-4ba2-8612-234a3678df28-cni-net-dir\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787325 kubelet[2695]: I1218 11:04:47.787319 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05cfa3ae-36f8-4ba2-8612-234a3678df28-lib-modules\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787383 kubelet[2695]: I1218 11:04:47.787351 2695 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05cfa3ae-36f8-4ba2-8612-234a3678df28-var-run-calico\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787403 kubelet[2695]: I1218 11:04:47.787385 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05cfa3ae-36f8-4ba2-8612-234a3678df28-cni-log-dir\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.787424 kubelet[2695]: I1218 11:04:47.787406 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05cfa3ae-36f8-4ba2-8612-234a3678df28-var-lib-calico\") pod \"calico-node-x69c7\" (UID: \"05cfa3ae-36f8-4ba2-8612-234a3678df28\") " pod="calico-system/calico-node-x69c7" Dec 18 11:04:47.893988 kubelet[2695]: E1218 11:04:47.893950 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:47.894985 containerd[1520]: time="2025-12-18T11:04:47.894943957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85f49666d5-nrwc5,Uid:d8384adc-9bcd-498f-925a-a4a73fe11213,Namespace:calico-system,Attempt:0,}" Dec 18 11:04:47.904699 kubelet[2695]: E1218 11:04:47.903807 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.904699 kubelet[2695]: W1218 11:04:47.903829 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.906289 kubelet[2695]: E1218 11:04:47.906208 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.917300 containerd[1520]: time="2025-12-18T11:04:47.917262941Z" level=info msg="connecting to shim b91d42f419c1256d7e2f468328be45402e125dfc585d534d94c721b5338e8dc6" address="unix:///run/containerd/s/b0e467b03a85c4616ad6c983b20d4a958c8e93d622d817b0ac7f3c8a5db34e78" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:04:47.948970 systemd[1]: Started cri-containerd-b91d42f419c1256d7e2f468328be45402e125dfc585d534d94c721b5338e8dc6.scope - libcontainer container b91d42f419c1256d7e2f468328be45402e125dfc585d534d94c721b5338e8dc6. 
Dec 18 11:04:47.957809 kubelet[2695]: E1218 11:04:47.957763 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:04:47.966192 kubelet[2695]: E1218 11:04:47.966147 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.966192 kubelet[2695]: W1218 11:04:47.966174 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.966192 kubelet[2695]: E1218 11:04:47.966194 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.966554 kubelet[2695]: E1218 11:04:47.966375 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.966554 kubelet[2695]: W1218 11:04:47.966384 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.966554 kubelet[2695]: E1218 11:04:47.966421 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.967804 kubelet[2695]: E1218 11:04:47.967780 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.967804 kubelet[2695]: W1218 11:04:47.967796 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.967804 kubelet[2695]: E1218 11:04:47.967810 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.968858 kubelet[2695]: E1218 11:04:47.968725 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.968858 kubelet[2695]: W1218 11:04:47.968742 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.968858 kubelet[2695]: E1218 11:04:47.968755 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:47.970387 kubelet[2695]: E1218 11:04:47.969821 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.970387 kubelet[2695]: W1218 11:04:47.969839 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.970387 kubelet[2695]: E1218 11:04:47.969852 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.970387 kubelet[2695]: E1218 11:04:47.970008 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.970387 kubelet[2695]: W1218 11:04:47.970015 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.970387 kubelet[2695]: E1218 11:04:47.970024 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.970387 kubelet[2695]: E1218 11:04:47.970142 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.970387 kubelet[2695]: W1218 11:04:47.970149 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.970387 kubelet[2695]: E1218 11:04:47.970156 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.970387 kubelet[2695]: E1218 11:04:47.970325 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.970627 kubelet[2695]: W1218 11:04:47.970334 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.970627 kubelet[2695]: E1218 11:04:47.970343 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.970627 kubelet[2695]: E1218 11:04:47.970478 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.970627 kubelet[2695]: W1218 11:04:47.970485 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.970627 kubelet[2695]: E1218 11:04:47.970493 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:47.970627 kubelet[2695]: E1218 11:04:47.970605 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.970627 kubelet[2695]: W1218 11:04:47.970612 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.970627 kubelet[2695]: E1218 11:04:47.970618 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.970806 kubelet[2695]: E1218 11:04:47.970752 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.970806 kubelet[2695]: W1218 11:04:47.970760 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.970806 kubelet[2695]: E1218 11:04:47.970767 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.970000 audit: BPF prog-id=127 op=LOAD Dec 18 11:04:47.971457 kubelet[2695]: E1218 11:04:47.970882 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.971457 kubelet[2695]: W1218 11:04:47.970890 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.971457 kubelet[2695]: E1218 11:04:47.970897 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.971457 kubelet[2695]: E1218 11:04:47.971024 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.971457 kubelet[2695]: W1218 11:04:47.971032 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.971457 kubelet[2695]: E1218 11:04:47.971041 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.971457 kubelet[2695]: E1218 11:04:47.971184 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.971457 kubelet[2695]: W1218 11:04:47.971195 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.971457 kubelet[2695]: E1218 11:04:47.971203 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:47.971457 kubelet[2695]: E1218 11:04:47.971332 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.970000 audit: BPF prog-id=128 op=LOAD Dec 18 11:04:47.970000 audit[3157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3145 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:47.971956 kubelet[2695]: W1218 11:04:47.971340 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.971956 kubelet[2695]: E1218 11:04:47.971347 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.971956 kubelet[2695]: E1218 11:04:47.971499 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.971956 kubelet[2695]: W1218 11:04:47.971506 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.971956 kubelet[2695]: E1218 11:04:47.971513 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.972248 kubelet[2695]: E1218 11:04:47.972219 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.972248 kubelet[2695]: W1218 11:04:47.972247 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.972302 kubelet[2695]: E1218 11:04:47.972260 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:47.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239316434326634313963313235366437653266343638333238626534 Dec 18 11:04:47.971000 audit: BPF prog-id=128 op=UNLOAD Dec 18 11:04:47.971000 audit[3157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3145 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:47.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239316434326634313963313235366437653266343638333238626534 Dec 18 11:04:47.971000 audit: BPF prog-id=129 op=LOAD Dec 18 11:04:47.971000 audit[3157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3145 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:47.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239316434326634313963313235366437653266343638333238626534 Dec 18 11:04:47.971000 audit: BPF prog-id=130 op=LOAD Dec 18 11:04:47.971000 audit[3157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3145 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:47.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239316434326634313963313235366437653266343638333238626534 Dec 18 11:04:47.971000 audit: BPF prog-id=130 op=UNLOAD Dec 18 11:04:47.971000 audit[3157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3145 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:47.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239316434326634313963313235366437653266343638333238626534 Dec 18 11:04:47.971000 audit: BPF prog-id=129 op=UNLOAD Dec 18 11:04:47.971000 audit[3157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3145 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:47.971000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239316434326634313963313235366437653266343638333238626534 Dec 18 11:04:47.972000 audit: BPF prog-id=131 op=LOAD Dec 18 11:04:47.972000 audit[3157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3145 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:47.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239316434326634313963313235366437653266343638333238626534 Dec 18 11:04:47.974021 kubelet[2695]: E1218 11:04:47.973410 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.974021 kubelet[2695]: W1218 11:04:47.973515 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.974021 kubelet[2695]: E1218 11:04:47.973528 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.974021 kubelet[2695]: E1218 11:04:47.973730 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.974021 kubelet[2695]: W1218 11:04:47.973741 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.974021 kubelet[2695]: E1218 11:04:47.973750 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.974354 kubelet[2695]: E1218 11:04:47.974329 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.974354 kubelet[2695]: W1218 11:04:47.974344 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.974354 kubelet[2695]: E1218 11:04:47.974356 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:47.988672 kubelet[2695]: E1218 11:04:47.988621 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.988672 kubelet[2695]: W1218 11:04:47.988653 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.988672 kubelet[2695]: E1218 11:04:47.988669 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.988846 kubelet[2695]: I1218 11:04:47.988694 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72p6\" (UniqueName: \"kubernetes.io/projected/de13faa1-4005-4e4c-bebe-9b34acc642ce-kube-api-access-z72p6\") pod \"csi-node-driver-tdf8q\" (UID: \"de13faa1-4005-4e4c-bebe-9b34acc642ce\") " pod="calico-system/csi-node-driver-tdf8q" Dec 18 11:04:47.989754 kubelet[2695]: E1218 11:04:47.988890 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.989754 kubelet[2695]: W1218 11:04:47.988903 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.989754 kubelet[2695]: E1218 11:04:47.988925 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.989754 kubelet[2695]: I1218 11:04:47.988939 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/de13faa1-4005-4e4c-bebe-9b34acc642ce-socket-dir\") pod \"csi-node-driver-tdf8q\" (UID: \"de13faa1-4005-4e4c-bebe-9b34acc642ce\") " pod="calico-system/csi-node-driver-tdf8q" Dec 18 11:04:47.989754 kubelet[2695]: E1218 11:04:47.989114 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.989754 kubelet[2695]: W1218 11:04:47.989137 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.989754 kubelet[2695]: E1218 11:04:47.989156 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:47.989754 kubelet[2695]: I1218 11:04:47.989183 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de13faa1-4005-4e4c-bebe-9b34acc642ce-kubelet-dir\") pod \"csi-node-driver-tdf8q\" (UID: \"de13faa1-4005-4e4c-bebe-9b34acc642ce\") " pod="calico-system/csi-node-driver-tdf8q" Dec 18 11:04:47.989754 kubelet[2695]: E1218 11:04:47.989389 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.989977 kubelet[2695]: W1218 11:04:47.989399 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.989977 kubelet[2695]: E1218 11:04:47.989416 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.989977 kubelet[2695]: I1218 11:04:47.989430 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/de13faa1-4005-4e4c-bebe-9b34acc642ce-registration-dir\") pod \"csi-node-driver-tdf8q\" (UID: \"de13faa1-4005-4e4c-bebe-9b34acc642ce\") " pod="calico-system/csi-node-driver-tdf8q" Dec 18 11:04:47.989977 kubelet[2695]: E1218 11:04:47.989610 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.989977 kubelet[2695]: W1218 11:04:47.989619 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.989977 kubelet[2695]: E1218 11:04:47.989641 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.989977 kubelet[2695]: E1218 11:04:47.989804 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.989977 kubelet[2695]: W1218 11:04:47.989811 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.989977 kubelet[2695]: E1218 11:04:47.989825 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.990134 kubelet[2695]: E1218 11:04:47.989983 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.990134 kubelet[2695]: W1218 11:04:47.989994 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.990134 kubelet[2695]: E1218 11:04:47.990094 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:47.990214 kubelet[2695]: E1218 11:04:47.990142 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.990214 kubelet[2695]: W1218 11:04:47.990149 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.990214 kubelet[2695]: E1218 11:04:47.990174 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.990316 kubelet[2695]: E1218 11:04:47.990310 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.990336 kubelet[2695]: W1218 11:04:47.990320 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.990357 kubelet[2695]: E1218 11:04:47.990345 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.990509 kubelet[2695]: E1218 11:04:47.990489 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.990509 kubelet[2695]: W1218 11:04:47.990500 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.990560 kubelet[2695]: E1218 11:04:47.990515 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.990560 kubelet[2695]: I1218 11:04:47.990533 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/de13faa1-4005-4e4c-bebe-9b34acc642ce-varrun\") pod \"csi-node-driver-tdf8q\" (UID: \"de13faa1-4005-4e4c-bebe-9b34acc642ce\") " pod="calico-system/csi-node-driver-tdf8q" Dec 18 11:04:47.990697 kubelet[2695]: E1218 11:04:47.990680 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.990697 kubelet[2695]: W1218 11:04:47.990692 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.990779 kubelet[2695]: E1218 11:04:47.990701 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:47.991060 kubelet[2695]: E1218 11:04:47.991031 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.991060 kubelet[2695]: W1218 11:04:47.991054 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.991134 kubelet[2695]: E1218 11:04:47.991070 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.991246 kubelet[2695]: E1218 11:04:47.991228 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.991246 kubelet[2695]: W1218 11:04:47.991240 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.991305 kubelet[2695]: E1218 11:04:47.991249 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.991520 kubelet[2695]: E1218 11:04:47.991498 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.991520 kubelet[2695]: W1218 11:04:47.991516 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.991584 kubelet[2695]: E1218 11:04:47.991527 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:47.992742 kubelet[2695]: E1218 11:04:47.991773 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:47.992742 kubelet[2695]: W1218 11:04:47.991788 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:47.992742 kubelet[2695]: E1218 11:04:47.991799 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:47.997333 containerd[1520]: time="2025-12-18T11:04:47.997302905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85f49666d5-nrwc5,Uid:d8384adc-9bcd-498f-925a-a4a73fe11213,Namespace:calico-system,Attempt:0,} returns sandbox id \"b91d42f419c1256d7e2f468328be45402e125dfc585d534d94c721b5338e8dc6\"" Dec 18 11:04:48.000332 kubelet[2695]: E1218 11:04:48.000308 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:48.004607 containerd[1520]: time="2025-12-18T11:04:48.004567632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 18 11:04:48.071100 kubelet[2695]: E1218 11:04:48.070960 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:48.071483 containerd[1520]: time="2025-12-18T11:04:48.071423218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x69c7,Uid:05cfa3ae-36f8-4ba2-8612-234a3678df28,Namespace:calico-system,Attempt:0,}" Dec 18 11:04:48.092263 kubelet[2695]: E1218 11:04:48.092192 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.092263 kubelet[2695]: W1218 11:04:48.092237 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.092263 kubelet[2695]: E1218 11:04:48.092257 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.092553 kubelet[2695]: E1218 11:04:48.092445 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.092553 kubelet[2695]: W1218 11:04:48.092458 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.092553 kubelet[2695]: E1218 11:04:48.092467 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.093054 kubelet[2695]: E1218 11:04:48.092906 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.093054 kubelet[2695]: W1218 11:04:48.092926 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.093054 kubelet[2695]: E1218 11:04:48.092941 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:48.093388 kubelet[2695]: E1218 11:04:48.093130 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.093388 kubelet[2695]: W1218 11:04:48.093145 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.093388 kubelet[2695]: E1218 11:04:48.093175 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.093609 kubelet[2695]: E1218 11:04:48.093538 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.093609 kubelet[2695]: W1218 11:04:48.093551 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.093609 kubelet[2695]: E1218 11:04:48.093569 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.094082 kubelet[2695]: E1218 11:04:48.094067 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.094263 kubelet[2695]: W1218 11:04:48.094153 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.094263 kubelet[2695]: E1218 11:04:48.094230 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.094423 kubelet[2695]: E1218 11:04:48.094368 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.094423 kubelet[2695]: W1218 11:04:48.094380 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.094423 kubelet[2695]: E1218 11:04:48.094407 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.094654 kubelet[2695]: E1218 11:04:48.094640 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.094775 kubelet[2695]: W1218 11:04:48.094710 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.094775 kubelet[2695]: E1218 11:04:48.094754 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:48.095827 kubelet[2695]: E1218 11:04:48.095670 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.095827 kubelet[2695]: W1218 11:04:48.095685 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.095827 kubelet[2695]: E1218 11:04:48.095757 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.096100 kubelet[2695]: E1218 11:04:48.095990 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.096100 kubelet[2695]: W1218 11:04:48.096004 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.096100 kubelet[2695]: E1218 11:04:48.096031 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.096302 kubelet[2695]: E1218 11:04:48.096245 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.096302 kubelet[2695]: W1218 11:04:48.096258 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.096302 kubelet[2695]: E1218 11:04:48.096281 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.096526 kubelet[2695]: E1218 11:04:48.096513 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.096644 containerd[1520]: time="2025-12-18T11:04:48.096614963Z" level=info msg="connecting to shim 2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2" address="unix:///run/containerd/s/0cc41a8ad252688775abdddc3fc41a4e826ba344c7763a21ef17a73741c3f0d6" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:04:48.096782 kubelet[2695]: W1218 11:04:48.096706 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.096782 kubelet[2695]: E1218 11:04:48.096759 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:48.097362 kubelet[2695]: E1218 11:04:48.097345 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.097542 kubelet[2695]: W1218 11:04:48.097428 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.097542 kubelet[2695]: E1218 11:04:48.097470 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.097679 kubelet[2695]: E1218 11:04:48.097666 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.097835 kubelet[2695]: W1218 11:04:48.097743 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.097835 kubelet[2695]: E1218 11:04:48.097772 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.098033 kubelet[2695]: E1218 11:04:48.097939 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.098033 kubelet[2695]: W1218 11:04:48.097951 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.098033 kubelet[2695]: E1218 11:04:48.097974 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.098245 kubelet[2695]: E1218 11:04:48.098191 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.098245 kubelet[2695]: W1218 11:04:48.098204 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.098245 kubelet[2695]: E1218 11:04:48.098228 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.098779 kubelet[2695]: E1218 11:04:48.098600 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.098779 kubelet[2695]: W1218 11:04:48.098616 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.098779 kubelet[2695]: E1218 11:04:48.098638 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:48.099032 kubelet[2695]: E1218 11:04:48.099013 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.099094 kubelet[2695]: W1218 11:04:48.099082 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.099264 kubelet[2695]: E1218 11:04:48.099215 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.101094 kubelet[2695]: E1218 11:04:48.100792 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.101094 kubelet[2695]: W1218 11:04:48.100809 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.101094 kubelet[2695]: E1218 11:04:48.100835 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.101623 kubelet[2695]: E1218 11:04:48.101277 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.101623 kubelet[2695]: W1218 11:04:48.101292 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.101623 kubelet[2695]: E1218 11:04:48.101308 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.101937 kubelet[2695]: E1218 11:04:48.101796 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.101937 kubelet[2695]: W1218 11:04:48.101811 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.101937 kubelet[2695]: E1218 11:04:48.101835 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.102248 kubelet[2695]: E1218 11:04:48.102097 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.102248 kubelet[2695]: W1218 11:04:48.102109 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.102248 kubelet[2695]: E1218 11:04:48.102144 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:48.102554 kubelet[2695]: E1218 11:04:48.102409 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.102554 kubelet[2695]: W1218 11:04:48.102423 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.102554 kubelet[2695]: E1218 11:04:48.102451 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.102861 kubelet[2695]: E1218 11:04:48.102769 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.102861 kubelet[2695]: W1218 11:04:48.102790 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.102861 kubelet[2695]: E1218 11:04:48.102820 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.103606 kubelet[2695]: E1218 11:04:48.103552 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.103606 kubelet[2695]: W1218 11:04:48.103568 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.103606 kubelet[2695]: E1218 11:04:48.103580 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.111949 kubelet[2695]: E1218 11:04:48.111894 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:48.111949 kubelet[2695]: W1218 11:04:48.111946 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:48.112039 kubelet[2695]: E1218 11:04:48.111964 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:48.126932 systemd[1]: Started cri-containerd-2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2.scope - libcontainer container 2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2. 
Dec 18 11:04:48.137000 audit: BPF prog-id=132 op=LOAD Dec 18 11:04:48.138000 audit: BPF prog-id=133 op=LOAD Dec 18 11:04:48.138000 audit[3274]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3237 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:48.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313161356138613531333832303463326230393234326136363762 Dec 18 11:04:48.138000 audit: BPF prog-id=133 op=UNLOAD Dec 18 11:04:48.138000 audit[3274]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:48.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313161356138613531333832303463326230393234326136363762 Dec 18 11:04:48.138000 audit: BPF prog-id=134 op=LOAD Dec 18 11:04:48.138000 audit[3274]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3237 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:48.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313161356138613531333832303463326230393234326136363762 Dec 18 11:04:48.138000 audit: BPF prog-id=135 op=LOAD Dec 18 11:04:48.138000 audit[3274]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3237 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:48.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313161356138613531333832303463326230393234326136363762 Dec 18 11:04:48.138000 audit: BPF prog-id=135 op=UNLOAD Dec 18 11:04:48.138000 audit[3274]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:48.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313161356138613531333832303463326230393234326136363762 Dec 18 11:04:48.138000 audit: BPF prog-id=134 op=UNLOAD Dec 18 11:04:48.138000 audit[3274]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:48.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313161356138613531333832303463326230393234326136363762 Dec 18 11:04:48.138000 audit: BPF prog-id=136 op=LOAD Dec 18 11:04:48.138000 audit[3274]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3237 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:48.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313161356138613531333832303463326230393234326136363762 Dec 18 11:04:48.152396 containerd[1520]: time="2025-12-18T11:04:48.152350537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x69c7,Uid:05cfa3ae-36f8-4ba2-8612-234a3678df28,Namespace:calico-system,Attempt:0,} returns sandbox id \"2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2\"" Dec 18 11:04:48.153048 kubelet[2695]: E1218 11:04:48.153028 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:48.488000 audit[3302]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:48.488000 audit[3302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe3de4e10 a2=0 a3=1 items=0 ppid=2850 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:48.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:48.498000 audit[3302]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:04:48.498000 audit[3302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe3de4e10 a2=0 a3=1 items=0 ppid=2850 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:48.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:04:49.018599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553744255.mount: Deactivated successfully. 
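The audit records just above show runc loading and unloading short-lived BPF programs while it sets up the calico-node container, and iptables-restore registering filter and nat rules through the nft backend (xtables-nft-multi); both are routine container-start activity rather than failures. The recurring kubelet dns.go warnings ("Nameserver limits exceeded") mean the host resolv.conf lists more nameservers than the kubelet will pass through to pods, so only the first entries (here 1.1.1.1 1.0.0.1 8.8.8.8) are applied and the rest are dropped. The Go sketch below is an illustrative approximation of that truncation, assuming a cap of three nameservers; it is not the kubelet's actual dns.go logic:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the cap the kubelet applies when building a pod's
// resolver config (three, matching the classic glibc limit); treat the exact
// value here as an assumption for illustration.
const maxNameservers = 3

// applyNameserverLimit keeps only the first maxNameservers "nameserver"
// entries from a resolv.conf-style string and reports whether any were dropped.
func applyNameserverLimit(resolvConf string) (kept []string, dropped bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = true
			}
		}
	}
	return kept, dropped
}

func main() {
	hostResolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := applyNameserverLimit(hostResolvConf)
	if dropped {
		// Mirrors the spirit of the kubelet warning in the log: some
		// nameservers were omitted, only the first three are applied.
		fmt.Println("Nameserver limits exceeded; applied:", strings.Join(kept, " "))
	}
}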
Dec 18 11:04:49.855897 kubelet[2695]: E1218 11:04:49.855850 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:04:49.987773 containerd[1520]: time="2025-12-18T11:04:49.987545482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:49.991310 containerd[1520]: time="2025-12-18T11:04:49.991247245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 18 11:04:49.995983 containerd[1520]: time="2025-12-18T11:04:49.995947530Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:50.006830 containerd[1520]: time="2025-12-18T11:04:50.006708539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:50.007449 containerd[1520]: time="2025-12-18T11:04:50.007311980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.002702468s" Dec 18 11:04:50.007449 containerd[1520]: time="2025-12-18T11:04:50.007348700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 18 11:04:50.009643 containerd[1520]: time="2025-12-18T11:04:50.009616742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 18 11:04:50.024331 containerd[1520]: time="2025-12-18T11:04:50.024296635Z" level=info msg="CreateContainer within sandbox \"b91d42f419c1256d7e2f468328be45402e125dfc585d534d94c721b5338e8dc6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 18 11:04:50.047627 containerd[1520]: time="2025-12-18T11:04:50.047514135Z" level=info msg="Container a31910fd9d338bf336c7986ffc6e7f5ee2823e8bb47250b88b42f8dc771690a1: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:04:50.072021 containerd[1520]: time="2025-12-18T11:04:50.071965476Z" level=info msg="CreateContainer within sandbox \"b91d42f419c1256d7e2f468328be45402e125dfc585d534d94c721b5338e8dc6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a31910fd9d338bf336c7986ffc6e7f5ee2823e8bb47250b88b42f8dc771690a1\"" Dec 18 11:04:50.073982 containerd[1520]: time="2025-12-18T11:04:50.072637076Z" level=info msg="StartContainer for \"a31910fd9d338bf336c7986ffc6e7f5ee2823e8bb47250b88b42f8dc771690a1\"" Dec 18 11:04:50.074134 containerd[1520]: time="2025-12-18T11:04:50.074103358Z" level=info msg="connecting to shim a31910fd9d338bf336c7986ffc6e7f5ee2823e8bb47250b88b42f8dc771690a1" address="unix:///run/containerd/s/b0e467b03a85c4616ad6c983b20d4a958c8e93d622d817b0ac7f3c8a5db34e78" protocol=ttrpc version=3 Dec 18 11:04:50.103949 systemd[1]: Started 
cri-containerd-a31910fd9d338bf336c7986ffc6e7f5ee2823e8bb47250b88b42f8dc771690a1.scope - libcontainer container a31910fd9d338bf336c7986ffc6e7f5ee2823e8bb47250b88b42f8dc771690a1. Dec 18 11:04:50.114000 audit: BPF prog-id=137 op=LOAD Dec 18 11:04:50.114000 audit: BPF prog-id=138 op=LOAD Dec 18 11:04:50.114000 audit[3314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3145 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:50.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133313931306664396433333862663333366337393836666663366537 Dec 18 11:04:50.114000 audit: BPF prog-id=138 op=UNLOAD Dec 18 11:04:50.114000 audit[3314]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3145 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:50.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133313931306664396433333862663333366337393836666663366537 Dec 18 11:04:50.114000 audit: BPF prog-id=139 op=LOAD Dec 18 11:04:50.114000 audit[3314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3145 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:50.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133313931306664396433333862663333366337393836666663366537 Dec 18 11:04:50.114000 audit: BPF prog-id=140 op=LOAD Dec 18 11:04:50.114000 audit[3314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3145 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:50.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133313931306664396433333862663333366337393836666663366537 Dec 18 11:04:50.114000 audit: BPF prog-id=140 op=UNLOAD Dec 18 11:04:50.114000 audit[3314]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3145 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:50.114000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133313931306664396433333862663333366337393836666663366537 Dec 18 11:04:50.114000 audit: BPF prog-id=139 op=UNLOAD Dec 18 11:04:50.114000 audit[3314]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3145 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:50.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133313931306664396433333862663333366337393836666663366537 Dec 18 11:04:50.114000 audit: BPF prog-id=141 op=LOAD Dec 18 11:04:50.114000 audit[3314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3145 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:50.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133313931306664396433333862663333366337393836666663366537 Dec 18 11:04:50.153876 containerd[1520]: time="2025-12-18T11:04:50.153827027Z" level=info msg="StartContainer for \"a31910fd9d338bf336c7986ffc6e7f5ee2823e8bb47250b88b42f8dc771690a1\" returns successfully" Dec 18 11:04:50.942616 kubelet[2695]: E1218 11:04:50.942518 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:50.958247 kubelet[2695]: I1218 11:04:50.958186 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85f49666d5-nrwc5" podStartSLOduration=1.949912329 podStartE2EDuration="3.958170962s" podCreationTimestamp="2025-12-18 11:04:47 +0000 UTC" firstStartedPulling="2025-12-18 11:04:48.001214069 +0000 UTC m=+24.233735927" lastFinishedPulling="2025-12-18 11:04:50.009472742 +0000 UTC m=+26.241994560" observedRunningTime="2025-12-18 11:04:50.958048522 +0000 UTC m=+27.190570380" watchObservedRunningTime="2025-12-18 11:04:50.958170962 +0000 UTC m=+27.190692820" Dec 18 11:04:50.995867 kubelet[2695]: E1218 11:04:50.995785 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.995867 kubelet[2695]: W1218 11:04:50.995866 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.996013 kubelet[2695]: E1218 11:04:50.995906 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:50.996101 kubelet[2695]: E1218 11:04:50.996085 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.996101 kubelet[2695]: W1218 11:04:50.996097 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.996172 kubelet[2695]: E1218 11:04:50.996106 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.996258 kubelet[2695]: E1218 11:04:50.996247 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.996258 kubelet[2695]: W1218 11:04:50.996257 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.996308 kubelet[2695]: E1218 11:04:50.996291 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.996520 kubelet[2695]: E1218 11:04:50.996506 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.996520 kubelet[2695]: W1218 11:04:50.996518 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.996580 kubelet[2695]: E1218 11:04:50.996528 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.996691 kubelet[2695]: E1218 11:04:50.996680 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.996741 kubelet[2695]: W1218 11:04:50.996691 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.996741 kubelet[2695]: E1218 11:04:50.996699 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.996879 kubelet[2695]: E1218 11:04:50.996866 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.996879 kubelet[2695]: W1218 11:04:50.996878 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.996928 kubelet[2695]: E1218 11:04:50.996887 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:50.997032 kubelet[2695]: E1218 11:04:50.997019 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.997032 kubelet[2695]: W1218 11:04:50.997030 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.997074 kubelet[2695]: E1218 11:04:50.997040 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.997185 kubelet[2695]: E1218 11:04:50.997174 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.997185 kubelet[2695]: W1218 11:04:50.997184 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.997230 kubelet[2695]: E1218 11:04:50.997193 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.997343 kubelet[2695]: E1218 11:04:50.997332 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.997343 kubelet[2695]: W1218 11:04:50.997342 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.997387 kubelet[2695]: E1218 11:04:50.997350 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.997476 kubelet[2695]: E1218 11:04:50.997466 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.997476 kubelet[2695]: W1218 11:04:50.997475 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.997543 kubelet[2695]: E1218 11:04:50.997483 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.997654 kubelet[2695]: E1218 11:04:50.997638 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.997654 kubelet[2695]: W1218 11:04:50.997652 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.997698 kubelet[2695]: E1218 11:04:50.997662 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:50.997834 kubelet[2695]: E1218 11:04:50.997820 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.997882 kubelet[2695]: W1218 11:04:50.997834 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.997882 kubelet[2695]: E1218 11:04:50.997843 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.997994 kubelet[2695]: E1218 11:04:50.997983 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.997994 kubelet[2695]: W1218 11:04:50.997993 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.998042 kubelet[2695]: E1218 11:04:50.998000 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.998137 kubelet[2695]: E1218 11:04:50.998126 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.998137 kubelet[2695]: W1218 11:04:50.998135 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.998198 kubelet[2695]: E1218 11:04:50.998142 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:50.998363 kubelet[2695]: E1218 11:04:50.998350 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:50.998363 kubelet[2695]: W1218 11:04:50.998361 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:50.998418 kubelet[2695]: E1218 11:04:50.998370 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:51.013558 containerd[1520]: time="2025-12-18T11:04:51.013500489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:51.016610 containerd[1520]: time="2025-12-18T11:04:51.016546572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 18 11:04:51.017653 containerd[1520]: time="2025-12-18T11:04:51.017625692Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:51.017800 kubelet[2695]: E1218 11:04:51.017783 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.017840 kubelet[2695]: W1218 11:04:51.017801 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.018042 kubelet[2695]: E1218 11:04:51.017817 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.018294 kubelet[2695]: E1218 11:04:51.018254 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.018294 kubelet[2695]: W1218 11:04:51.018270 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.018294 kubelet[2695]: E1218 11:04:51.018281 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.019677 kubelet[2695]: E1218 11:04:51.019643 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.019748 kubelet[2695]: W1218 11:04:51.019732 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.019778 kubelet[2695]: E1218 11:04:51.019753 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:51.020904 containerd[1520]: time="2025-12-18T11:04:51.020868615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:51.021867 containerd[1520]: time="2025-12-18T11:04:51.021836296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.012189794s" Dec 18 11:04:51.021902 containerd[1520]: time="2025-12-18T11:04:51.021873016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 18 11:04:51.022931 kubelet[2695]: E1218 11:04:51.022809 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.022931 kubelet[2695]: W1218 11:04:51.022827 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.022931 kubelet[2695]: E1218 11:04:51.022869 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.023102 kubelet[2695]: E1218 11:04:51.023082 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.023198 kubelet[2695]: W1218 11:04:51.023142 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.023497 kubelet[2695]: E1218 11:04:51.023391 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.023497 kubelet[2695]: W1218 11:04:51.023402 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.023608 kubelet[2695]: E1218 11:04:51.023583 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.023743 kubelet[2695]: E1218 11:04:51.023713 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.023798 kubelet[2695]: W1218 11:04:51.023786 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.023850 kubelet[2695]: E1218 11:04:51.023839 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:51.025278 kubelet[2695]: E1218 11:04:51.024097 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.025278 kubelet[2695]: E1218 11:04:51.024203 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.025278 kubelet[2695]: W1218 11:04:51.024259 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.025278 kubelet[2695]: E1218 11:04:51.024275 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.025278 kubelet[2695]: E1218 11:04:51.024485 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.025278 kubelet[2695]: W1218 11:04:51.024494 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.025278 kubelet[2695]: E1218 11:04:51.024509 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.025278 kubelet[2695]: E1218 11:04:51.024765 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.025278 kubelet[2695]: W1218 11:04:51.024779 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.025278 kubelet[2695]: E1218 11:04:51.024823 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.025515 kubelet[2695]: E1218 11:04:51.024990 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.025515 kubelet[2695]: W1218 11:04:51.025000 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.025515 kubelet[2695]: E1218 11:04:51.025018 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:51.025515 kubelet[2695]: E1218 11:04:51.025253 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.025515 kubelet[2695]: W1218 11:04:51.025268 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.025515 kubelet[2695]: E1218 11:04:51.025282 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.025515 kubelet[2695]: E1218 11:04:51.025501 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.025637 kubelet[2695]: W1218 11:04:51.025519 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.025637 kubelet[2695]: E1218 11:04:51.025532 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.026374 kubelet[2695]: E1218 11:04:51.025685 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.026374 kubelet[2695]: W1218 11:04:51.025792 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.026374 kubelet[2695]: E1218 11:04:51.025815 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.026374 kubelet[2695]: E1218 11:04:51.026053 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.026374 kubelet[2695]: W1218 11:04:51.026063 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.026374 kubelet[2695]: E1218 11:04:51.026086 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:51.026374 kubelet[2695]: E1218 11:04:51.026227 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.026374 kubelet[2695]: W1218 11:04:51.026235 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.026374 kubelet[2695]: E1218 11:04:51.026377 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.026610 kubelet[2695]: W1218 11:04:51.026390 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.026610 kubelet[2695]: E1218 11:04:51.026400 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.026610 kubelet[2695]: E1218 11:04:51.026421 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 18 11:04:51.026693 kubelet[2695]: E1218 11:04:51.026674 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 18 11:04:51.026740 kubelet[2695]: W1218 11:04:51.026692 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 18 11:04:51.026740 kubelet[2695]: E1218 11:04:51.026704 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 18 11:04:51.034176 containerd[1520]: time="2025-12-18T11:04:51.034133466Z" level=info msg="CreateContainer within sandbox \"2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 18 11:04:51.043915 containerd[1520]: time="2025-12-18T11:04:51.043863114Z" level=info msg="Container 311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:04:51.051784 containerd[1520]: time="2025-12-18T11:04:51.051743960Z" level=info msg="CreateContainer within sandbox \"2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c\"" Dec 18 11:04:51.052305 containerd[1520]: time="2025-12-18T11:04:51.052278441Z" level=info msg="StartContainer for \"311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c\"" Dec 18 11:04:51.054332 containerd[1520]: time="2025-12-18T11:04:51.054297122Z" level=info msg="connecting to shim 311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c" address="unix:///run/containerd/s/0cc41a8ad252688775abdddc3fc41a4e826ba344c7763a21ef17a73741c3f0d6" protocol=ttrpc version=3 Dec 18 11:04:51.078912 systemd[1]: Started cri-containerd-311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c.scope - libcontainer container 311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c. Dec 18 11:04:51.142000 audit: BPF prog-id=142 op=LOAD Dec 18 11:04:51.143837 kernel: kauditd_printk_skb: 86 callbacks suppressed Dec 18 11:04:51.143915 kernel: audit: type=1334 audit(1766055891.142:505): prog-id=142 op=LOAD Dec 18 11:04:51.142000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:51.147811 kernel: audit: type=1300 audit(1766055891.142:505): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:51.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313439343434316634306461613533356437346463363333623166 Dec 18 11:04:51.151024 kernel: audit: type=1327 audit(1766055891.142:505): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313439343434316634306461613533356437346463363333623166 Dec 18 11:04:51.142000 audit: BPF prog-id=143 op=LOAD Dec 18 11:04:51.151131 kernel: audit: type=1334 audit(1766055891.142:506): prog-id=143 op=LOAD Dec 18 11:04:51.142000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:51.155012 kernel: audit: type=1300 audit(1766055891.142:506): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:51.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313439343434316634306461613533356437346463363333623166 Dec 18 11:04:51.158698 kernel: audit: type=1327 audit(1766055891.142:506): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313439343434316634306461613533356437346463363333623166 Dec 18 11:04:51.143000 audit: BPF prog-id=143 op=UNLOAD Dec 18 11:04:51.143000 audit[3390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:51.162776 kernel: audit: type=1334 audit(1766055891.143:507): prog-id=143 op=UNLOAD Dec 18 11:04:51.162820 kernel: audit: type=1300 audit(1766055891.143:507): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:51.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313439343434316634306461613533356437346463363333623166 Dec 18 11:04:51.167872 kernel: audit: type=1327 audit(1766055891.143:507): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313439343434316634306461613533356437346463363333623166 Dec 18 11:04:51.143000 audit: BPF prog-id=142 op=UNLOAD Dec 18 11:04:51.168841 kernel: audit: type=1334 audit(1766055891.143:508): prog-id=142 op=UNLOAD Dec 18 11:04:51.143000 audit[3390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:51.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313439343434316634306461613533356437346463363333623166 Dec 18 11:04:51.143000 audit: BPF prog-id=144 op=LOAD Dec 18 11:04:51.143000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:51.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313439343434316634306461613533356437346463363333623166 Dec 18 11:04:51.178767 containerd[1520]: time="2025-12-18T11:04:51.178695743Z" level=info msg="StartContainer for \"311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c\" returns successfully" Dec 18 11:04:51.185829 systemd[1]: cri-containerd-311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c.scope: Deactivated successfully. Dec 18 11:04:51.191688 containerd[1520]: time="2025-12-18T11:04:51.191545713Z" level=info msg="received container exit event container_id:\"311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c\" id:\"311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c\" pid:3403 exited_at:{seconds:1766055891 nanos:190531193}" Dec 18 11:04:51.192000 audit: BPF prog-id=144 op=UNLOAD Dec 18 11:04:51.218117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-311494441f40daa535d74dc633b1facd1d2dd4fa318d4432d1c89b0398e80b3c-rootfs.mount: Deactivated successfully. Dec 18 11:04:51.856650 kubelet[2695]: E1218 11:04:51.856330 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:04:51.946071 kubelet[2695]: I1218 11:04:51.946039 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 18 11:04:51.946490 kubelet[2695]: E1218 11:04:51.946357 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:51.946520 kubelet[2695]: E1218 11:04:51.946487 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:51.948087 containerd[1520]: time="2025-12-18T11:04:51.947854646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 18 11:04:53.856887 kubelet[2695]: E1218 11:04:53.856847 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:04:53.919231 containerd[1520]: time="2025-12-18T11:04:53.919187143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:53.920342 containerd[1520]: time="2025-12-18T11:04:53.920302224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 18 11:04:53.921162 containerd[1520]: time="2025-12-18T11:04:53.921136225Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:53.923473 
containerd[1520]: time="2025-12-18T11:04:53.923442267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:53.924301 containerd[1520]: time="2025-12-18T11:04:53.924027027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 1.976132701s" Dec 18 11:04:53.924301 containerd[1520]: time="2025-12-18T11:04:53.924060187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 18 11:04:53.925919 containerd[1520]: time="2025-12-18T11:04:53.925883148Z" level=info msg="CreateContainer within sandbox \"2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 18 11:04:53.936335 containerd[1520]: time="2025-12-18T11:04:53.936286436Z" level=info msg="Container 8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:04:53.939330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294530559.mount: Deactivated successfully. Dec 18 11:04:53.949796 containerd[1520]: time="2025-12-18T11:04:53.949016045Z" level=info msg="CreateContainer within sandbox \"2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef\"" Dec 18 11:04:53.950226 containerd[1520]: time="2025-12-18T11:04:53.950177966Z" level=info msg="StartContainer for \"8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef\"" Dec 18 11:04:53.951838 containerd[1520]: time="2025-12-18T11:04:53.951812407Z" level=info msg="connecting to shim 8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef" address="unix:///run/containerd/s/0cc41a8ad252688775abdddc3fc41a4e826ba344c7763a21ef17a73741c3f0d6" protocol=ttrpc version=3 Dec 18 11:04:53.980926 systemd[1]: Started cri-containerd-8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef.scope - libcontainer container 8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef. 
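
A note on the repeated driver-call.go / plugins.go errors at 11:04:51 above: the kubelet's FlexVolume probe executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument `init` and parses the executable's stdout as a JSON status object. Because the binary is not present yet ("executable file not found in $PATH"), stdout is empty and the decode fails with "unexpected end of JSON input"; the spam presumably stops once the flexvol-driver container that ran at 11:04:51 has installed the real driver. The sketch below is a purely illustrative Python stand-in for the call convention the kubelet expects, not the actual Calico uds driver.

```python
#!/usr/bin/env python3
"""Illustrative FlexVolume driver stub (NOT the real nodeagent~uds/uds binary).

The kubelet invokes the driver as `<driver> init|attach|mount ...` and parses
stdout as JSON; empty stdout is what produces "unexpected end of JSON input"
in driver-call.go.
"""
import json
import sys


def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and advertise that this driver needs no attach/detach.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Anything not implemented is reported as "Not supported".
    print(json.dumps({"status": "Not supported",
                      "message": f"operation {op!r} not implemented"}))
    return 1


if __name__ == "__main__":
    sys.exit(main())
```
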
Dec 18 11:04:54.037000 audit: BPF prog-id=145 op=LOAD Dec 18 11:04:54.037000 audit[3451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3237 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:54.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363261646363633630323064316266363164353933353862643337 Dec 18 11:04:54.037000 audit: BPF prog-id=146 op=LOAD Dec 18 11:04:54.037000 audit[3451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3237 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:54.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363261646363633630323064316266363164353933353862643337 Dec 18 11:04:54.037000 audit: BPF prog-id=146 op=UNLOAD Dec 18 11:04:54.037000 audit[3451]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:54.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363261646363633630323064316266363164353933353862643337 Dec 18 11:04:54.037000 audit: BPF prog-id=145 op=UNLOAD Dec 18 11:04:54.037000 audit[3451]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:54.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363261646363633630323064316266363164353933353862643337 Dec 18 11:04:54.037000 audit: BPF prog-id=147 op=LOAD Dec 18 11:04:54.037000 audit[3451]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3237 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:54.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363261646363633630323064316266363164353933353862643337 Dec 18 11:04:54.060115 containerd[1520]: time="2025-12-18T11:04:54.060077721Z" level=info msg="StartContainer for 
\"8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef\" returns successfully" Dec 18 11:04:54.582726 containerd[1520]: time="2025-12-18T11:04:54.582668270Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 18 11:04:54.584881 systemd[1]: cri-containerd-8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef.scope: Deactivated successfully. Dec 18 11:04:54.585219 systemd[1]: cri-containerd-8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef.scope: Consumed 465ms CPU time, 176.1M memory peak, 2.6M read from disk, 165.9M written to disk. Dec 18 11:04:54.588850 containerd[1520]: time="2025-12-18T11:04:54.588803594Z" level=info msg="received container exit event container_id:\"8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef\" id:\"8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef\" pid:3463 exited_at:{seconds:1766055894 nanos:588579874}" Dec 18 11:04:54.592000 audit: BPF prog-id=147 op=UNLOAD Dec 18 11:04:54.608937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8862adccc6020d1bf61d59358bd37cc5dea34901f33690dd296c3a600e7f3aef-rootfs.mount: Deactivated successfully. Dec 18 11:04:54.660666 kubelet[2695]: I1218 11:04:54.660035 2695 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 18 11:04:54.753590 systemd[1]: Created slice kubepods-besteffort-pod91434238_f80e_42f1_bb48_55395c65ff33.slice - libcontainer container kubepods-besteffort-pod91434238_f80e_42f1_bb48_55395c65ff33.slice. Dec 18 11:04:54.765931 systemd[1]: Created slice kubepods-besteffort-pod400efbcf_c6cb_4177_bb40_b857f3dc9989.slice - libcontainer container kubepods-besteffort-pod400efbcf_c6cb_4177_bb40_b857f3dc9989.slice. Dec 18 11:04:54.774461 systemd[1]: Created slice kubepods-burstable-pod52e2dbeb_b419_4df5_843b_cc3d0bbd23f2.slice - libcontainer container kubepods-burstable-pod52e2dbeb_b419_4df5_843b_cc3d0bbd23f2.slice. Dec 18 11:04:54.785886 systemd[1]: Created slice kubepods-burstable-pod762d304a_7fbc_4a89_8956_5d4f6addfbf8.slice - libcontainer container kubepods-burstable-pod762d304a_7fbc_4a89_8956_5d4f6addfbf8.slice. Dec 18 11:04:54.794516 systemd[1]: Created slice kubepods-besteffort-podbdeec125_ec5f_4e1e_9801_c884f349294d.slice - libcontainer container kubepods-besteffort-podbdeec125_ec5f_4e1e_9801_c884f349294d.slice. Dec 18 11:04:54.803927 systemd[1]: Created slice kubepods-besteffort-podccde4425_b596_4719_9d97_b9f665c9d1bc.slice - libcontainer container kubepods-besteffort-podccde4425_b596_4719_9d97_b9f665c9d1bc.slice. Dec 18 11:04:54.812322 systemd[1]: Created slice kubepods-besteffort-pod68f46b97_44ea_43d3_8c4b_06c74ab4d137.slice - libcontainer container kubepods-besteffort-pod68f46b97_44ea_43d3_8c4b_06c74ab4d137.slice. Dec 18 11:04:54.818624 systemd[1]: Created slice kubepods-besteffort-podfc3f310c_1439_4243_ade3_cd849e5460ff.slice - libcontainer container kubepods-besteffort-podfc3f310c_1439_4243_ade3_cd849e5460ff.slice. 
Dec 18 11:04:54.846988 kubelet[2695]: I1218 11:04:54.846874 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85t57\" (UniqueName: \"kubernetes.io/projected/400efbcf-c6cb-4177-bb40-b857f3dc9989-kube-api-access-85t57\") pod \"goldmane-666569f655-6j22f\" (UID: \"400efbcf-c6cb-4177-bb40-b857f3dc9989\") " pod="calico-system/goldmane-666569f655-6j22f" Dec 18 11:04:54.846988 kubelet[2695]: I1218 11:04:54.846921 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/68f46b97-44ea-43d3-8c4b-06c74ab4d137-calico-apiserver-certs\") pod \"calico-apiserver-54fb4cdc46-f2s62\" (UID: \"68f46b97-44ea-43d3-8c4b-06c74ab4d137\") " pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" Dec 18 11:04:54.846988 kubelet[2695]: I1218 11:04:54.846942 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/400efbcf-c6cb-4177-bb40-b857f3dc9989-goldmane-ca-bundle\") pod \"goldmane-666569f655-6j22f\" (UID: \"400efbcf-c6cb-4177-bb40-b857f3dc9989\") " pod="calico-system/goldmane-666569f655-6j22f" Dec 18 11:04:54.846988 kubelet[2695]: I1218 11:04:54.846960 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8djv\" (UniqueName: \"kubernetes.io/projected/fc3f310c-1439-4243-ade3-cd849e5460ff-kube-api-access-p8djv\") pod \"calico-apiserver-68744c49b-dflk9\" (UID: \"fc3f310c-1439-4243-ade3-cd849e5460ff\") " pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" Dec 18 11:04:54.847812 kubelet[2695]: I1218 11:04:54.846995 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400efbcf-c6cb-4177-bb40-b857f3dc9989-config\") pod \"goldmane-666569f655-6j22f\" (UID: \"400efbcf-c6cb-4177-bb40-b857f3dc9989\") " pod="calico-system/goldmane-666569f655-6j22f" Dec 18 11:04:54.847812 kubelet[2695]: I1218 11:04:54.847019 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstfn\" (UniqueName: \"kubernetes.io/projected/bdeec125-ec5f-4e1e-9801-c884f349294d-kube-api-access-jstfn\") pod \"calico-apiserver-68744c49b-xhw45\" (UID: \"bdeec125-ec5f-4e1e-9801-c884f349294d\") " pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" Dec 18 11:04:54.847812 kubelet[2695]: I1218 11:04:54.847069 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc3f310c-1439-4243-ade3-cd849e5460ff-calico-apiserver-certs\") pod \"calico-apiserver-68744c49b-dflk9\" (UID: \"fc3f310c-1439-4243-ade3-cd849e5460ff\") " pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" Dec 18 11:04:54.847812 kubelet[2695]: I1218 11:04:54.847137 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg7gk\" (UniqueName: \"kubernetes.io/projected/68f46b97-44ea-43d3-8c4b-06c74ab4d137-kube-api-access-kg7gk\") pod \"calico-apiserver-54fb4cdc46-f2s62\" (UID: \"68f46b97-44ea-43d3-8c4b-06c74ab4d137\") " pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" Dec 18 11:04:54.847812 kubelet[2695]: I1218 11:04:54.847176 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/400efbcf-c6cb-4177-bb40-b857f3dc9989-goldmane-key-pair\") pod \"goldmane-666569f655-6j22f\" (UID: \"400efbcf-c6cb-4177-bb40-b857f3dc9989\") " pod="calico-system/goldmane-666569f655-6j22f" Dec 18 11:04:54.848083 kubelet[2695]: I1218 11:04:54.847196 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/762d304a-7fbc-4a89-8956-5d4f6addfbf8-config-volume\") pod \"coredns-668d6bf9bc-snrcr\" (UID: \"762d304a-7fbc-4a89-8956-5d4f6addfbf8\") " pod="kube-system/coredns-668d6bf9bc-snrcr" Dec 18 11:04:54.848083 kubelet[2695]: I1218 11:04:54.847214 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ccde4425-b596-4719-9d97-b9f665c9d1bc-whisker-backend-key-pair\") pod \"whisker-6679c8cccc-xx6g4\" (UID: \"ccde4425-b596-4719-9d97-b9f665c9d1bc\") " pod="calico-system/whisker-6679c8cccc-xx6g4" Dec 18 11:04:54.848083 kubelet[2695]: I1218 11:04:54.847230 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91434238-f80e-42f1-bb48-55395c65ff33-tigera-ca-bundle\") pod \"calico-kube-controllers-779d74d86-dfx2b\" (UID: \"91434238-f80e-42f1-bb48-55395c65ff33\") " pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" Dec 18 11:04:54.848083 kubelet[2695]: I1218 11:04:54.847256 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bdeec125-ec5f-4e1e-9801-c884f349294d-calico-apiserver-certs\") pod \"calico-apiserver-68744c49b-xhw45\" (UID: \"bdeec125-ec5f-4e1e-9801-c884f349294d\") " pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" Dec 18 11:04:54.848083 kubelet[2695]: I1218 11:04:54.847303 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccde4425-b596-4719-9d97-b9f665c9d1bc-whisker-ca-bundle\") pod \"whisker-6679c8cccc-xx6g4\" (UID: \"ccde4425-b596-4719-9d97-b9f665c9d1bc\") " pod="calico-system/whisker-6679c8cccc-xx6g4" Dec 18 11:04:54.848199 kubelet[2695]: I1218 11:04:54.847321 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zlk6\" (UniqueName: \"kubernetes.io/projected/ccde4425-b596-4719-9d97-b9f665c9d1bc-kube-api-access-7zlk6\") pod \"whisker-6679c8cccc-xx6g4\" (UID: \"ccde4425-b596-4719-9d97-b9f665c9d1bc\") " pod="calico-system/whisker-6679c8cccc-xx6g4" Dec 18 11:04:54.848199 kubelet[2695]: I1218 11:04:54.847367 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprml\" (UniqueName: \"kubernetes.io/projected/52e2dbeb-b419-4df5-843b-cc3d0bbd23f2-kube-api-access-pprml\") pod \"coredns-668d6bf9bc-l4c7n\" (UID: \"52e2dbeb-b419-4df5-843b-cc3d0bbd23f2\") " pod="kube-system/coredns-668d6bf9bc-l4c7n" Dec 18 11:04:54.848199 kubelet[2695]: I1218 11:04:54.847385 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52e2dbeb-b419-4df5-843b-cc3d0bbd23f2-config-volume\") pod \"coredns-668d6bf9bc-l4c7n\" (UID: \"52e2dbeb-b419-4df5-843b-cc3d0bbd23f2\") " 
pod="kube-system/coredns-668d6bf9bc-l4c7n" Dec 18 11:04:54.848199 kubelet[2695]: I1218 11:04:54.847402 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2r7n\" (UniqueName: \"kubernetes.io/projected/91434238-f80e-42f1-bb48-55395c65ff33-kube-api-access-q2r7n\") pod \"calico-kube-controllers-779d74d86-dfx2b\" (UID: \"91434238-f80e-42f1-bb48-55395c65ff33\") " pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" Dec 18 11:04:54.848199 kubelet[2695]: I1218 11:04:54.847420 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzg6z\" (UniqueName: \"kubernetes.io/projected/762d304a-7fbc-4a89-8956-5d4f6addfbf8-kube-api-access-wzg6z\") pod \"coredns-668d6bf9bc-snrcr\" (UID: \"762d304a-7fbc-4a89-8956-5d4f6addfbf8\") " pod="kube-system/coredns-668d6bf9bc-snrcr" Dec 18 11:04:54.982776 kubelet[2695]: E1218 11:04:54.982584 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:54.987674 containerd[1520]: time="2025-12-18T11:04:54.987610981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 18 11:04:55.061667 containerd[1520]: time="2025-12-18T11:04:55.061610588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-779d74d86-dfx2b,Uid:91434238-f80e-42f1-bb48-55395c65ff33,Namespace:calico-system,Attempt:0,}" Dec 18 11:04:55.071411 containerd[1520]: time="2025-12-18T11:04:55.071363274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6j22f,Uid:400efbcf-c6cb-4177-bb40-b857f3dc9989,Namespace:calico-system,Attempt:0,}" Dec 18 11:04:55.077820 kubelet[2695]: E1218 11:04:55.077779 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:55.078204 containerd[1520]: time="2025-12-18T11:04:55.078155478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l4c7n,Uid:52e2dbeb-b419-4df5-843b-cc3d0bbd23f2,Namespace:kube-system,Attempt:0,}" Dec 18 11:04:55.092647 kubelet[2695]: E1218 11:04:55.092615 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:04:55.093287 containerd[1520]: time="2025-12-18T11:04:55.093257327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snrcr,Uid:762d304a-7fbc-4a89-8956-5d4f6addfbf8,Namespace:kube-system,Attempt:0,}" Dec 18 11:04:55.100895 containerd[1520]: time="2025-12-18T11:04:55.100648412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68744c49b-xhw45,Uid:bdeec125-ec5f-4e1e-9801-c884f349294d,Namespace:calico-apiserver,Attempt:0,}" Dec 18 11:04:55.109940 containerd[1520]: time="2025-12-18T11:04:55.109879658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6679c8cccc-xx6g4,Uid:ccde4425-b596-4719-9d97-b9f665c9d1bc,Namespace:calico-system,Attempt:0,}" Dec 18 11:04:55.119546 containerd[1520]: time="2025-12-18T11:04:55.119512104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fb4cdc46-f2s62,Uid:68f46b97-44ea-43d3-8c4b-06c74ab4d137,Namespace:calico-apiserver,Attempt:0,}" Dec 18 11:04:55.122623 containerd[1520]: 
time="2025-12-18T11:04:55.122591386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68744c49b-dflk9,Uid:fc3f310c-1439-4243-ade3-cd849e5460ff,Namespace:calico-apiserver,Attempt:0,}" Dec 18 11:04:55.266062 containerd[1520]: time="2025-12-18T11:04:55.264954355Z" level=error msg="Failed to destroy network for sandbox \"363a5bc1aa9908723a863727a41cea85fd4d5ad703693b6ef8be9e022cc77241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.270767 containerd[1520]: time="2025-12-18T11:04:55.270700798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-779d74d86-dfx2b,Uid:91434238-f80e-42f1-bb48-55395c65ff33,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"363a5bc1aa9908723a863727a41cea85fd4d5ad703693b6ef8be9e022cc77241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.273769 kubelet[2695]: E1218 11:04:55.273007 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"363a5bc1aa9908723a863727a41cea85fd4d5ad703693b6ef8be9e022cc77241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.275961 containerd[1520]: time="2025-12-18T11:04:55.275913802Z" level=error msg="Failed to destroy network for sandbox \"da0cffc27b32957ad9ea3f8e304b1e4720c31435c6c8aa59649572f0055fc69f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.277619 kubelet[2695]: E1218 11:04:55.277562 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"363a5bc1aa9908723a863727a41cea85fd4d5ad703693b6ef8be9e022cc77241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" Dec 18 11:04:55.277701 kubelet[2695]: E1218 11:04:55.277631 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"363a5bc1aa9908723a863727a41cea85fd4d5ad703693b6ef8be9e022cc77241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" Dec 18 11:04:55.277833 kubelet[2695]: E1218 11:04:55.277691 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-779d74d86-dfx2b_calico-system(91434238-f80e-42f1-bb48-55395c65ff33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-779d74d86-dfx2b_calico-system(91434238-f80e-42f1-bb48-55395c65ff33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"363a5bc1aa9908723a863727a41cea85fd4d5ad703693b6ef8be9e022cc77241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" podUID="91434238-f80e-42f1-bb48-55395c65ff33" Dec 18 11:04:55.277901 containerd[1520]: time="2025-12-18T11:04:55.277802563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6j22f,Uid:400efbcf-c6cb-4177-bb40-b857f3dc9989,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da0cffc27b32957ad9ea3f8e304b1e4720c31435c6c8aa59649572f0055fc69f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.278758 kubelet[2695]: E1218 11:04:55.278707 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da0cffc27b32957ad9ea3f8e304b1e4720c31435c6c8aa59649572f0055fc69f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.278807 kubelet[2695]: E1218 11:04:55.278773 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da0cffc27b32957ad9ea3f8e304b1e4720c31435c6c8aa59649572f0055fc69f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-6j22f" Dec 18 11:04:55.278807 kubelet[2695]: E1218 11:04:55.278792 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da0cffc27b32957ad9ea3f8e304b1e4720c31435c6c8aa59649572f0055fc69f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-6j22f" Dec 18 11:04:55.279119 kubelet[2695]: E1218 11:04:55.278828 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-6j22f_calico-system(400efbcf-c6cb-4177-bb40-b857f3dc9989)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-6j22f_calico-system(400efbcf-c6cb-4177-bb40-b857f3dc9989)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da0cffc27b32957ad9ea3f8e304b1e4720c31435c6c8aa59649572f0055fc69f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-6j22f" podUID="400efbcf-c6cb-4177-bb40-b857f3dc9989" Dec 18 11:04:55.281671 containerd[1520]: time="2025-12-18T11:04:55.281622485Z" level=error msg="Failed to destroy network for sandbox \"1c85cde38e2768e5d736a271447786370b3a439f244377914f33d04ff74a11cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 18 11:04:55.282853 containerd[1520]: time="2025-12-18T11:04:55.282819486Z" level=error msg="Failed to destroy network for sandbox \"396b64eef3919529a4d527d26db020842fef897aee5461a66cca3bb1f5569cf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.283774 containerd[1520]: time="2025-12-18T11:04:55.283603327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l4c7n,Uid:52e2dbeb-b419-4df5-843b-cc3d0bbd23f2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c85cde38e2768e5d736a271447786370b3a439f244377914f33d04ff74a11cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.284172 kubelet[2695]: E1218 11:04:55.283989 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c85cde38e2768e5d736a271447786370b3a439f244377914f33d04ff74a11cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.284172 kubelet[2695]: E1218 11:04:55.284066 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c85cde38e2768e5d736a271447786370b3a439f244377914f33d04ff74a11cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l4c7n" Dec 18 11:04:55.284172 kubelet[2695]: E1218 11:04:55.284095 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c85cde38e2768e5d736a271447786370b3a439f244377914f33d04ff74a11cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l4c7n" Dec 18 11:04:55.284300 kubelet[2695]: E1218 11:04:55.284131 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-l4c7n_kube-system(52e2dbeb-b419-4df5-843b-cc3d0bbd23f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l4c7n_kube-system(52e2dbeb-b419-4df5-843b-cc3d0bbd23f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c85cde38e2768e5d736a271447786370b3a439f244377914f33d04ff74a11cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l4c7n" podUID="52e2dbeb-b419-4df5-843b-cc3d0bbd23f2" Dec 18 11:04:55.286700 containerd[1520]: time="2025-12-18T11:04:55.285830488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68744c49b-dflk9,Uid:fc3f310c-1439-4243-ade3-cd849e5460ff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"396b64eef3919529a4d527d26db020842fef897aee5461a66cca3bb1f5569cf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.286813 kubelet[2695]: E1218 11:04:55.286518 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396b64eef3919529a4d527d26db020842fef897aee5461a66cca3bb1f5569cf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.286813 kubelet[2695]: E1218 11:04:55.286561 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396b64eef3919529a4d527d26db020842fef897aee5461a66cca3bb1f5569cf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" Dec 18 11:04:55.287774 containerd[1520]: time="2025-12-18T11:04:55.287744369Z" level=error msg="Failed to destroy network for sandbox \"6f622061a0dda6d8f75befd8dda2a03346700b19db25168ce5dea8f6e3e630f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.289069 containerd[1520]: time="2025-12-18T11:04:55.289031170Z" level=error msg="Failed to destroy network for sandbox \"ee09732e4270a848baf26f5f1d3ba395f54bdd74d15456e30e811ba584991790\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.290483 kubelet[2695]: E1218 11:04:55.286579 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396b64eef3919529a4d527d26db020842fef897aee5461a66cca3bb1f5569cf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" Dec 18 11:04:55.290544 kubelet[2695]: E1218 11:04:55.290505 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68744c49b-dflk9_calico-apiserver(fc3f310c-1439-4243-ade3-cd849e5460ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68744c49b-dflk9_calico-apiserver(fc3f310c-1439-4243-ade3-cd849e5460ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"396b64eef3919529a4d527d26db020842fef897aee5461a66cca3bb1f5569cf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" podUID="fc3f310c-1439-4243-ade3-cd849e5460ff" Dec 18 11:04:55.291297 containerd[1520]: time="2025-12-18T11:04:55.291227571Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6679c8cccc-xx6g4,Uid:ccde4425-b596-4719-9d97-b9f665c9d1bc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f622061a0dda6d8f75befd8dda2a03346700b19db25168ce5dea8f6e3e630f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.291450 kubelet[2695]: E1218 11:04:55.291405 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f622061a0dda6d8f75befd8dda2a03346700b19db25168ce5dea8f6e3e630f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.291450 kubelet[2695]: E1218 11:04:55.291442 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f622061a0dda6d8f75befd8dda2a03346700b19db25168ce5dea8f6e3e630f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6679c8cccc-xx6g4" Dec 18 11:04:55.291507 kubelet[2695]: E1218 11:04:55.291459 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f622061a0dda6d8f75befd8dda2a03346700b19db25168ce5dea8f6e3e630f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6679c8cccc-xx6g4" Dec 18 11:04:55.291507 kubelet[2695]: E1218 11:04:55.291486 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6679c8cccc-xx6g4_calico-system(ccde4425-b596-4719-9d97-b9f665c9d1bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6679c8cccc-xx6g4_calico-system(ccde4425-b596-4719-9d97-b9f665c9d1bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f622061a0dda6d8f75befd8dda2a03346700b19db25168ce5dea8f6e3e630f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6679c8cccc-xx6g4" podUID="ccde4425-b596-4719-9d97-b9f665c9d1bc" Dec 18 11:04:55.293191 containerd[1520]: time="2025-12-18T11:04:55.293125133Z" level=error msg="Failed to destroy network for sandbox \"aad98bb73bc6110e6306eb909709d0cb5c02527805c917c68cb18e7c567875ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.295418 containerd[1520]: time="2025-12-18T11:04:55.295330654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68744c49b-xhw45,Uid:bdeec125-ec5f-4e1e-9801-c884f349294d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad98bb73bc6110e6306eb909709d0cb5c02527805c917c68cb18e7c567875ff\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.295849 kubelet[2695]: E1218 11:04:55.295701 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad98bb73bc6110e6306eb909709d0cb5c02527805c917c68cb18e7c567875ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.296097 kubelet[2695]: E1218 11:04:55.295977 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad98bb73bc6110e6306eb909709d0cb5c02527805c917c68cb18e7c567875ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" Dec 18 11:04:55.296097 kubelet[2695]: E1218 11:04:55.296005 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad98bb73bc6110e6306eb909709d0cb5c02527805c917c68cb18e7c567875ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" Dec 18 11:04:55.296097 kubelet[2695]: E1218 11:04:55.296049 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68744c49b-xhw45_calico-apiserver(bdeec125-ec5f-4e1e-9801-c884f349294d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68744c49b-xhw45_calico-apiserver(bdeec125-ec5f-4e1e-9801-c884f349294d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aad98bb73bc6110e6306eb909709d0cb5c02527805c917c68cb18e7c567875ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" podUID="bdeec125-ec5f-4e1e-9801-c884f349294d" Dec 18 11:04:55.296864 containerd[1520]: time="2025-12-18T11:04:55.296785575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fb4cdc46-f2s62,Uid:68f46b97-44ea-43d3-8c4b-06c74ab4d137,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee09732e4270a848baf26f5f1d3ba395f54bdd74d15456e30e811ba584991790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.296974 kubelet[2695]: E1218 11:04:55.296943 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee09732e4270a848baf26f5f1d3ba395f54bdd74d15456e30e811ba584991790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.297021 kubelet[2695]: E1218 11:04:55.296984 2695 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee09732e4270a848baf26f5f1d3ba395f54bdd74d15456e30e811ba584991790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" Dec 18 11:04:55.297021 kubelet[2695]: E1218 11:04:55.297000 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee09732e4270a848baf26f5f1d3ba395f54bdd74d15456e30e811ba584991790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" Dec 18 11:04:55.297067 kubelet[2695]: E1218 11:04:55.297026 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54fb4cdc46-f2s62_calico-apiserver(68f46b97-44ea-43d3-8c4b-06c74ab4d137)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54fb4cdc46-f2s62_calico-apiserver(68f46b97-44ea-43d3-8c4b-06c74ab4d137)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee09732e4270a848baf26f5f1d3ba395f54bdd74d15456e30e811ba584991790\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" podUID="68f46b97-44ea-43d3-8c4b-06c74ab4d137" Dec 18 11:04:55.299955 containerd[1520]: time="2025-12-18T11:04:55.299918337Z" level=error msg="Failed to destroy network for sandbox \"153be83066bf88d4f774b4226396a06c052cc4a65c0f86eee4743fa22b2362d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.302759 containerd[1520]: time="2025-12-18T11:04:55.302705899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snrcr,Uid:762d304a-7fbc-4a89-8956-5d4f6addfbf8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"153be83066bf88d4f774b4226396a06c052cc4a65c0f86eee4743fa22b2362d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.303044 kubelet[2695]: E1218 11:04:55.302988 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"153be83066bf88d4f774b4226396a06c052cc4a65c0f86eee4743fa22b2362d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.303184 kubelet[2695]: E1218 11:04:55.303105 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"153be83066bf88d4f774b4226396a06c052cc4a65c0f86eee4743fa22b2362d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-snrcr" Dec 18 11:04:55.303184 kubelet[2695]: E1218 11:04:55.303155 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"153be83066bf88d4f774b4226396a06c052cc4a65c0f86eee4743fa22b2362d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-snrcr" Dec 18 11:04:55.303347 kubelet[2695]: E1218 11:04:55.303318 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-snrcr_kube-system(762d304a-7fbc-4a89-8956-5d4f6addfbf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-snrcr_kube-system(762d304a-7fbc-4a89-8956-5d4f6addfbf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"153be83066bf88d4f774b4226396a06c052cc4a65c0f86eee4743fa22b2362d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-snrcr" podUID="762d304a-7fbc-4a89-8956-5d4f6addfbf8" Dec 18 11:04:55.863031 systemd[1]: Created slice kubepods-besteffort-podde13faa1_4005_4e4c_bebe_9b34acc642ce.slice - libcontainer container kubepods-besteffort-podde13faa1_4005_4e4c_bebe_9b34acc642ce.slice. Dec 18 11:04:55.865705 containerd[1520]: time="2025-12-18T11:04:55.865666571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdf8q,Uid:de13faa1-4005-4e4c-bebe-9b34acc642ce,Namespace:calico-system,Attempt:0,}" Dec 18 11:04:55.916426 containerd[1520]: time="2025-12-18T11:04:55.916333883Z" level=error msg="Failed to destroy network for sandbox \"bb0c645ff24ff17dd6d243befee5276804727360334f48fee700815b99b8a8de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.918512 containerd[1520]: time="2025-12-18T11:04:55.918467364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdf8q,Uid:de13faa1-4005-4e4c-bebe-9b34acc642ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb0c645ff24ff17dd6d243befee5276804727360334f48fee700815b99b8a8de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.918809 kubelet[2695]: E1218 11:04:55.918684 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb0c645ff24ff17dd6d243befee5276804727360334f48fee700815b99b8a8de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 18 11:04:55.918809 kubelet[2695]: E1218 11:04:55.918760 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb0c645ff24ff17dd6d243befee5276804727360334f48fee700815b99b8a8de\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tdf8q" Dec 18 11:04:55.918809 kubelet[2695]: E1218 11:04:55.918780 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb0c645ff24ff17dd6d243befee5276804727360334f48fee700815b99b8a8de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tdf8q" Dec 18 11:04:55.918936 kubelet[2695]: E1218 11:04:55.918842 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tdf8q_calico-system(de13faa1-4005-4e4c-bebe-9b34acc642ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tdf8q_calico-system(de13faa1-4005-4e4c-bebe-9b34acc642ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb0c645ff24ff17dd6d243befee5276804727360334f48fee700815b99b8a8de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:04:58.769791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount14365199.mount: Deactivated successfully. Dec 18 11:04:58.906126 containerd[1520]: time="2025-12-18T11:04:58.906039420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 18 11:04:58.906539 containerd[1520]: time="2025-12-18T11:04:58.906157540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:58.906970 containerd[1520]: time="2025-12-18T11:04:58.906946780Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:58.908869 containerd[1520]: time="2025-12-18T11:04:58.908836141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 18 11:04:58.909450 containerd[1520]: time="2025-12-18T11:04:58.909332982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.921681201s" Dec 18 11:04:58.909450 containerd[1520]: time="2025-12-18T11:04:58.909370822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 18 11:04:58.919570 containerd[1520]: time="2025-12-18T11:04:58.917559546Z" level=info msg="CreateContainer within sandbox \"2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 18 11:04:58.930763 containerd[1520]: 
time="2025-12-18T11:04:58.930725633Z" level=info msg="Container 802d00586c9ba0c19ff7ce88b16e685e81bee8ca7eac13c972a2829b6cd87502: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:04:58.940582 containerd[1520]: time="2025-12-18T11:04:58.940521838Z" level=info msg="CreateContainer within sandbox \"2811a5a8a5138204c2b09242a667b3de3724edadce736eab45038f9d3d2adde2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"802d00586c9ba0c19ff7ce88b16e685e81bee8ca7eac13c972a2829b6cd87502\"" Dec 18 11:04:58.941035 containerd[1520]: time="2025-12-18T11:04:58.941001078Z" level=info msg="StartContainer for \"802d00586c9ba0c19ff7ce88b16e685e81bee8ca7eac13c972a2829b6cd87502\"" Dec 18 11:04:58.943000 containerd[1520]: time="2025-12-18T11:04:58.942942079Z" level=info msg="connecting to shim 802d00586c9ba0c19ff7ce88b16e685e81bee8ca7eac13c972a2829b6cd87502" address="unix:///run/containerd/s/0cc41a8ad252688775abdddc3fc41a4e826ba344c7763a21ef17a73741c3f0d6" protocol=ttrpc version=3 Dec 18 11:04:58.967944 systemd[1]: Started cri-containerd-802d00586c9ba0c19ff7ce88b16e685e81bee8ca7eac13c972a2829b6cd87502.scope - libcontainer container 802d00586c9ba0c19ff7ce88b16e685e81bee8ca7eac13c972a2829b6cd87502. Dec 18 11:04:59.012000 audit: BPF prog-id=148 op=LOAD Dec 18 11:04:59.014083 kernel: kauditd_printk_skb: 22 callbacks suppressed Dec 18 11:04:59.014137 kernel: audit: type=1334 audit(1766055899.012:517): prog-id=148 op=LOAD Dec 18 11:04:59.012000 audit[3806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3237 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:59.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326430303538366339626130633139666637636538386231366536 Dec 18 11:04:59.018247 kernel: audit: type=1300 audit(1766055899.012:517): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3237 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:59.018271 kernel: audit: type=1327 audit(1766055899.012:517): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326430303538366339626130633139666637636538386231366536 Dec 18 11:04:59.012000 audit: BPF prog-id=149 op=LOAD Dec 18 11:04:59.012000 audit[3806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3237 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:59.022611 kernel: audit: type=1334 audit(1766055899.012:518): prog-id=149 op=LOAD Dec 18 11:04:59.022635 kernel: audit: type=1300 audit(1766055899.012:518): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3237 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:59.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326430303538366339626130633139666637636538386231366536 Dec 18 11:04:59.029393 kernel: audit: type=1327 audit(1766055899.012:518): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326430303538366339626130633139666637636538386231366536 Dec 18 11:04:59.012000 audit: BPF prog-id=149 op=UNLOAD Dec 18 11:04:59.029582 kernel: audit: type=1334 audit(1766055899.012:519): prog-id=149 op=UNLOAD Dec 18 11:04:59.012000 audit[3806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:59.033791 kernel: audit: type=1300 audit(1766055899.012:519): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:59.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326430303538366339626130633139666637636538386231366536 Dec 18 11:04:59.033882 kernel: audit: type=1327 audit(1766055899.012:519): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326430303538366339626130633139666637636538386231366536 Dec 18 11:04:59.012000 audit: BPF prog-id=148 op=UNLOAD Dec 18 11:04:59.012000 audit[3806]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:59.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326430303538366339626130633139666637636538386231366536 Dec 18 11:04:59.012000 audit: BPF prog-id=150 op=LOAD Dec 18 11:04:59.012000 audit[3806]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3237 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:04:59.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326430303538366339626130633139666637636538386231366536 Dec 18 11:04:59.037978 kernel: audit: 
type=1334 audit(1766055899.012:520): prog-id=148 op=UNLOAD Dec 18 11:04:59.053005 containerd[1520]: time="2025-12-18T11:04:59.052969934Z" level=info msg="StartContainer for \"802d00586c9ba0c19ff7ce88b16e685e81bee8ca7eac13c972a2829b6cd87502\" returns successfully" Dec 18 11:04:59.168066 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 18 11:04:59.168208 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 18 11:04:59.386794 kubelet[2695]: I1218 11:04:59.386638 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccde4425-b596-4719-9d97-b9f665c9d1bc-whisker-ca-bundle\") pod \"ccde4425-b596-4719-9d97-b9f665c9d1bc\" (UID: \"ccde4425-b596-4719-9d97-b9f665c9d1bc\") " Dec 18 11:04:59.386794 kubelet[2695]: I1218 11:04:59.386687 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zlk6\" (UniqueName: \"kubernetes.io/projected/ccde4425-b596-4719-9d97-b9f665c9d1bc-kube-api-access-7zlk6\") pod \"ccde4425-b596-4719-9d97-b9f665c9d1bc\" (UID: \"ccde4425-b596-4719-9d97-b9f665c9d1bc\") " Dec 18 11:04:59.386794 kubelet[2695]: I1218 11:04:59.386738 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ccde4425-b596-4719-9d97-b9f665c9d1bc-whisker-backend-key-pair\") pod \"ccde4425-b596-4719-9d97-b9f665c9d1bc\" (UID: \"ccde4425-b596-4719-9d97-b9f665c9d1bc\") " Dec 18 11:04:59.410305 kubelet[2695]: I1218 11:04:59.410139 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccde4425-b596-4719-9d97-b9f665c9d1bc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ccde4425-b596-4719-9d97-b9f665c9d1bc" (UID: "ccde4425-b596-4719-9d97-b9f665c9d1bc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 18 11:04:59.412832 kubelet[2695]: I1218 11:04:59.412784 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccde4425-b596-4719-9d97-b9f665c9d1bc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ccde4425-b596-4719-9d97-b9f665c9d1bc" (UID: "ccde4425-b596-4719-9d97-b9f665c9d1bc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 18 11:04:59.416574 kubelet[2695]: I1218 11:04:59.416531 2695 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccde4425-b596-4719-9d97-b9f665c9d1bc-kube-api-access-7zlk6" (OuterVolumeSpecName: "kube-api-access-7zlk6") pod "ccde4425-b596-4719-9d97-b9f665c9d1bc" (UID: "ccde4425-b596-4719-9d97-b9f665c9d1bc"). InnerVolumeSpecName "kube-api-access-7zlk6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 18 11:04:59.487474 kubelet[2695]: I1218 11:04:59.487421 2695 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7zlk6\" (UniqueName: \"kubernetes.io/projected/ccde4425-b596-4719-9d97-b9f665c9d1bc-kube-api-access-7zlk6\") on node \"localhost\" DevicePath \"\"" Dec 18 11:04:59.487474 kubelet[2695]: I1218 11:04:59.487454 2695 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccde4425-b596-4719-9d97-b9f665c9d1bc-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 18 11:04:59.487474 kubelet[2695]: I1218 11:04:59.487464 2695 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ccde4425-b596-4719-9d97-b9f665c9d1bc-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 18 11:04:59.770653 systemd[1]: var-lib-kubelet-pods-ccde4425\x2db596\x2d4719\x2d9d97\x2db9f665c9d1bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7zlk6.mount: Deactivated successfully. Dec 18 11:04:59.770745 systemd[1]: var-lib-kubelet-pods-ccde4425\x2db596\x2d4719\x2d9d97\x2db9f665c9d1bc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 18 11:04:59.863076 systemd[1]: Removed slice kubepods-besteffort-podccde4425_b596_4719_9d97_b9f665c9d1bc.slice - libcontainer container kubepods-besteffort-podccde4425_b596_4719_9d97_b9f665c9d1bc.slice. Dec 18 11:05:00.000020 kubelet[2695]: E1218 11:04:59.999981 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:00.021778 kubelet[2695]: I1218 11:05:00.021630 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x69c7" podStartSLOduration=2.259073436 podStartE2EDuration="13.015653639s" podCreationTimestamp="2025-12-18 11:04:47 +0000 UTC" firstStartedPulling="2025-12-18 11:04:48.153687019 +0000 UTC m=+24.386208877" lastFinishedPulling="2025-12-18 11:04:58.910267222 +0000 UTC m=+35.142789080" observedRunningTime="2025-12-18 11:05:00.015070679 +0000 UTC m=+36.247592537" watchObservedRunningTime="2025-12-18 11:05:00.015653639 +0000 UTC m=+36.248175497" Dec 18 11:05:00.065633 systemd[1]: Created slice kubepods-besteffort-podf07f5925_2bfb_4163_b634_d7861acf227f.slice - libcontainer container kubepods-besteffort-podf07f5925_2bfb_4163_b634_d7861acf227f.slice. 
Dec 18 11:05:00.192586 kubelet[2695]: I1218 11:05:00.192526 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pp7p\" (UniqueName: \"kubernetes.io/projected/f07f5925-2bfb-4163-b634-d7861acf227f-kube-api-access-8pp7p\") pod \"whisker-79d6c7bc4f-mf4nr\" (UID: \"f07f5925-2bfb-4163-b634-d7861acf227f\") " pod="calico-system/whisker-79d6c7bc4f-mf4nr" Dec 18 11:05:00.192586 kubelet[2695]: I1218 11:05:00.192577 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f07f5925-2bfb-4163-b634-d7861acf227f-whisker-ca-bundle\") pod \"whisker-79d6c7bc4f-mf4nr\" (UID: \"f07f5925-2bfb-4163-b634-d7861acf227f\") " pod="calico-system/whisker-79d6c7bc4f-mf4nr" Dec 18 11:05:00.192792 kubelet[2695]: I1218 11:05:00.192599 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f07f5925-2bfb-4163-b634-d7861acf227f-whisker-backend-key-pair\") pod \"whisker-79d6c7bc4f-mf4nr\" (UID: \"f07f5925-2bfb-4163-b634-d7861acf227f\") " pod="calico-system/whisker-79d6c7bc4f-mf4nr" Dec 18 11:05:00.369535 containerd[1520]: time="2025-12-18T11:05:00.369414200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d6c7bc4f-mf4nr,Uid:f07f5925-2bfb-4163-b634-d7861acf227f,Namespace:calico-system,Attempt:0,}" Dec 18 11:05:00.546771 systemd-networkd[1350]: calia99bf3f90cc: Link UP Dec 18 11:05:00.546965 systemd-networkd[1350]: calia99bf3f90cc: Gained carrier Dec 18 11:05:00.566319 containerd[1520]: 2025-12-18 11:05:00.391 [INFO][3873] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 18 11:05:00.566319 containerd[1520]: 2025-12-18 11:05:00.421 [INFO][3873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0 whisker-79d6c7bc4f- calico-system f07f5925-2bfb-4163-b634-d7861acf227f 946 0 2025-12-18 11:05:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79d6c7bc4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79d6c7bc4f-mf4nr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia99bf3f90cc [] [] }} ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Namespace="calico-system" Pod="whisker-79d6c7bc4f-mf4nr" WorkloadEndpoint="localhost-k8s-whisker--79d6c7bc4f--mf4nr-" Dec 18 11:05:00.566319 containerd[1520]: 2025-12-18 11:05:00.421 [INFO][3873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Namespace="calico-system" Pod="whisker-79d6c7bc4f-mf4nr" WorkloadEndpoint="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" Dec 18 11:05:00.566319 containerd[1520]: 2025-12-18 11:05:00.477 [INFO][3887] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" HandleID="k8s-pod-network.a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Workload="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.477 [INFO][3887] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" HandleID="k8s-pod-network.a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Workload="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79d6c7bc4f-mf4nr", "timestamp":"2025-12-18 11:05:00.477321488 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.477 [INFO][3887] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.477 [INFO][3887] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.477 [INFO][3887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.489 [INFO][3887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" host="localhost" Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.502 [INFO][3887] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.509 [INFO][3887] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.511 [INFO][3887] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.514 [INFO][3887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:00.566529 containerd[1520]: 2025-12-18 11:05:00.514 [INFO][3887] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" host="localhost" Dec 18 11:05:00.567404 containerd[1520]: 2025-12-18 11:05:00.516 [INFO][3887] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd Dec 18 11:05:00.567404 containerd[1520]: 2025-12-18 11:05:00.520 [INFO][3887] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" host="localhost" Dec 18 11:05:00.567404 containerd[1520]: 2025-12-18 11:05:00.527 [INFO][3887] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" host="localhost" Dec 18 11:05:00.567404 containerd[1520]: 2025-12-18 11:05:00.527 [INFO][3887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" host="localhost" Dec 18 11:05:00.567404 containerd[1520]: 2025-12-18 11:05:00.527 [INFO][3887] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 18 11:05:00.567404 containerd[1520]: 2025-12-18 11:05:00.527 [INFO][3887] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" HandleID="k8s-pod-network.a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Workload="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" Dec 18 11:05:00.567548 containerd[1520]: 2025-12-18 11:05:00.533 [INFO][3873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Namespace="calico-system" Pod="whisker-79d6c7bc4f-mf4nr" WorkloadEndpoint="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0", GenerateName:"whisker-79d6c7bc4f-", Namespace:"calico-system", SelfLink:"", UID:"f07f5925-2bfb-4163-b634-d7861acf227f", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 5, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d6c7bc4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79d6c7bc4f-mf4nr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia99bf3f90cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:00.567548 containerd[1520]: 2025-12-18 11:05:00.533 [INFO][3873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Namespace="calico-system" Pod="whisker-79d6c7bc4f-mf4nr" WorkloadEndpoint="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" Dec 18 11:05:00.567621 containerd[1520]: 2025-12-18 11:05:00.533 [INFO][3873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia99bf3f90cc ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Namespace="calico-system" Pod="whisker-79d6c7bc4f-mf4nr" WorkloadEndpoint="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" Dec 18 11:05:00.567621 containerd[1520]: 2025-12-18 11:05:00.546 [INFO][3873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Namespace="calico-system" Pod="whisker-79d6c7bc4f-mf4nr" WorkloadEndpoint="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" Dec 18 11:05:00.567662 containerd[1520]: 2025-12-18 11:05:00.546 [INFO][3873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Namespace="calico-system" Pod="whisker-79d6c7bc4f-mf4nr" WorkloadEndpoint="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0", GenerateName:"whisker-79d6c7bc4f-", Namespace:"calico-system", SelfLink:"", UID:"f07f5925-2bfb-4163-b634-d7861acf227f", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 5, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d6c7bc4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd", Pod:"whisker-79d6c7bc4f-mf4nr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia99bf3f90cc", MAC:"ce:bc:bc:5d:81:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:00.567712 containerd[1520]: 2025-12-18 11:05:00.561 [INFO][3873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" Namespace="calico-system" Pod="whisker-79d6c7bc4f-mf4nr" WorkloadEndpoint="localhost-k8s-whisker--79d6c7bc4f--mf4nr-eth0" Dec 18 11:05:00.608316 containerd[1520]: time="2025-12-18T11:05:00.608265828Z" level=info msg="connecting to shim a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd" address="unix:///run/containerd/s/8a1ce7fc31c127bdef4bcc9c9f1c56192e04095c6d775130dab1388f68c0e173" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:05:00.642938 systemd[1]: Started cri-containerd-a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd.scope - libcontainer container a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd. 
Dec 18 11:05:00.662000 audit: BPF prog-id=151 op=LOAD Dec 18 11:05:00.662000 audit: BPF prog-id=152 op=LOAD Dec 18 11:05:00.662000 audit[4018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=4005 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:00.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643633313863353166663234373765323731313635323761666161 Dec 18 11:05:00.664000 audit: BPF prog-id=152 op=UNLOAD Dec 18 11:05:00.664000 audit[4018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4005 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:00.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643633313863353166663234373765323731313635323761666161 Dec 18 11:05:00.664000 audit: BPF prog-id=153 op=LOAD Dec 18 11:05:00.664000 audit[4018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4005 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:00.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643633313863353166663234373765323731313635323761666161 Dec 18 11:05:00.664000 audit: BPF prog-id=154 op=LOAD Dec 18 11:05:00.664000 audit[4018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4005 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:00.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643633313863353166663234373765323731313635323761666161 Dec 18 11:05:00.664000 audit: BPF prog-id=154 op=UNLOAD Dec 18 11:05:00.664000 audit[4018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4005 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:00.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643633313863353166663234373765323731313635323761666161 Dec 18 11:05:00.664000 audit: BPF prog-id=153 op=UNLOAD Dec 18 11:05:00.664000 audit[4018]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4005 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:00.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643633313863353166663234373765323731313635323761666161 Dec 18 11:05:00.664000 audit: BPF prog-id=155 op=LOAD Dec 18 11:05:00.664000 audit[4018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4005 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:00.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136643633313863353166663234373765323731313635323761666161 Dec 18 11:05:00.666408 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 18 11:05:00.726851 containerd[1520]: time="2025-12-18T11:05:00.726796842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d6c7bc4f-mf4nr,Uid:f07f5925-2bfb-4163-b634-d7861acf227f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6d6318c51ff2477e27116527afaa16feb907d879049128f4b20cf7e6b87b1bd\"" Dec 18 11:05:00.729850 containerd[1520]: time="2025-12-18T11:05:00.729815763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 18 11:05:00.948844 containerd[1520]: time="2025-12-18T11:05:00.948090782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:00.959156 containerd[1520]: time="2025-12-18T11:05:00.959104907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 18 11:05:00.959254 containerd[1520]: time="2025-12-18T11:05:00.959203267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:00.959463 kubelet[2695]: E1218 11:05:00.959426 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 18 11:05:00.960179 kubelet[2695]: E1218 11:05:00.959945 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 18 11:05:00.964012 kubelet[2695]: E1218 11:05:00.963959 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9349b312f37c4e9f929d49397049c45a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8pp7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d6c7bc4f-mf4nr_calico-system(f07f5925-2bfb-4163-b634-d7861acf227f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:00.966681 containerd[1520]: time="2025-12-18T11:05:00.966058390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 18 11:05:01.006702 kubelet[2695]: I1218 11:05:01.006668 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 18 11:05:01.007094 kubelet[2695]: E1218 11:05:01.007077 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:01.186445 containerd[1520]: time="2025-12-18T11:05:01.186402805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:01.187400 containerd[1520]: time="2025-12-18T11:05:01.187359325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 18 11:05:01.187488 containerd[1520]: time="2025-12-18T11:05:01.187441685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:01.187617 kubelet[2695]: E1218 11:05:01.187580 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 18 11:05:01.187660 kubelet[2695]: E1218 11:05:01.187631 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 18 11:05:01.187812 kubelet[2695]: E1218 11:05:01.187771 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pp7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d6c7bc4f-mf4nr_calico-system(f07f5925-2bfb-4163-b634-d7861acf227f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:01.189301 kubelet[2695]: E1218 11:05:01.189261 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d6c7bc4f-mf4nr" podUID="f07f5925-2bfb-4163-b634-d7861acf227f" Dec 18 11:05:01.854895 systemd-networkd[1350]: calia99bf3f90cc: Gained IPv6LL Dec 18 11:05:01.858245 kubelet[2695]: I1218 11:05:01.858208 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccde4425-b596-4719-9d97-b9f665c9d1bc" 
path="/var/lib/kubelet/pods/ccde4425-b596-4719-9d97-b9f665c9d1bc/volumes" Dec 18 11:05:02.005667 kubelet[2695]: E1218 11:05:02.005622 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d6c7bc4f-mf4nr" podUID="f07f5925-2bfb-4163-b634-d7861acf227f" Dec 18 11:05:02.068000 audit[4073]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:02.068000 audit[4073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffede373b0 a2=0 a3=1 items=0 ppid=2850 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:02.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:02.084000 audit[4073]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:02.084000 audit[4073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffede373b0 a2=0 a3=1 items=0 ppid=2850 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:02.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:02.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-4-10.0.0.27:22-10.0.0.1:34960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:02.373378 systemd[1]: Started sshd@7-4-10.0.0.27:22-10.0.0.1:34960.service - OpenSSH per-connection server daemon (10.0.0.1:34960). 
Dec 18 11:05:02.445000 audit[4075]: AUDIT1101 pid=4075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:02.446116 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 34960 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:02.446000 audit[4075]: AUDIT1103 pid=4075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:02.446000 audit[4075]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7ad19a0 a2=3 a3=0 items=0 ppid=1 pid=4075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:02.446000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:02.447933 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:02.451895 systemd-logind[1499]: New session '9' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:02.464965 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 18 11:05:02.466000 audit[4075]: AUDIT1105 pid=4075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:02.468000 audit[4079]: AUDIT1103 pid=4079 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:02.649477 sshd[4079]: Connection closed by 10.0.0.1 port 34960 Dec 18 11:05:02.649740 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:02.649000 audit[4075]: AUDIT1106 pid=4075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:02.649000 audit[4075]: AUDIT1104 pid=4075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:02.653363 systemd[1]: sshd@7-4-10.0.0.27:22-10.0.0.1:34960.service: Deactivated successfully. Dec 18 11:05:02.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-4-10.0.0.27:22-10.0.0.1:34960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:02.655081 systemd[1]: session-9.scope: Deactivated successfully. Dec 18 11:05:02.657253 systemd-logind[1499]: Session 9 logged out. Waiting for processes to exit. Dec 18 11:05:02.658151 systemd-logind[1499]: Removed session 9. 
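The audit records in this log carry the triggering command line as a hex-encoded, NUL-separated proctitle= field (the iptables-restore and bpftool entries around here are examples). Decoding one back into a readable command is mechanical; the sketch below uses the iptables-restore value copied verbatim from the record above.

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // proctitle= value copied verbatim from an iptables-restore audit record above.
        const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // The kernel stores argv as NUL-separated strings; rejoin with spaces.
        args := strings.Split(string(raw), "\x00")
        fmt.Println(strings.Join(args, " "))
        // Prints: iptables-restore -w 5 -W 100000 --noflush --counters
    }

The same decoding applies to the bpftool proctitle values that follow, which turn out to be "bpftool map create /sys/fs/bpf/tc/globals/..." and "bpftool map list --json" invocations made by calico-node as it sets up its BPF maps.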
Dec 18 11:05:03.013781 kubelet[2695]: I1218 11:05:03.013737 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 18 11:05:03.014379 kubelet[2695]: E1218 11:05:03.014253 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:03.047000 audit[4118]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:03.047000 audit[4118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff2f69f90 a2=0 a3=1 items=0 ppid=2850 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:03.059000 audit[4118]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:03.059000 audit[4118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff2f69f90 a2=0 a3=1 items=0 ppid=2850 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.059000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:03.919000 audit: BPF prog-id=156 op=LOAD Dec 18 11:05:03.919000 audit[4162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffff0e9a28 a2=98 a3=ffffff0e9a18 items=0 ppid=4129 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.919000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 18 11:05:03.919000 audit: BPF prog-id=156 op=UNLOAD Dec 18 11:05:03.919000 audit[4162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffff0e99f8 a3=0 items=0 ppid=4129 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.919000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 18 11:05:03.919000 audit: BPF prog-id=157 op=LOAD Dec 18 11:05:03.919000 audit[4162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffff0e98d8 a2=74 a3=95 items=0 ppid=4129 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.919000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 18 11:05:03.919000 audit: BPF prog-id=157 op=UNLOAD Dec 18 11:05:03.919000 audit[4162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4129 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.919000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 18 11:05:03.919000 audit: BPF prog-id=158 op=LOAD Dec 18 11:05:03.919000 audit[4162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffff0e9908 a2=40 a3=ffffff0e9938 items=0 ppid=4129 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.919000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 18 11:05:03.920000 audit: BPF prog-id=158 op=UNLOAD Dec 18 11:05:03.920000 audit[4162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffff0e9938 items=0 ppid=4129 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.920000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 18 11:05:03.921000 audit: BPF prog-id=159 op=LOAD Dec 18 11:05:03.921000 audit[4163]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdd4bb588 a2=98 a3=ffffdd4bb578 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.921000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:03.922000 audit: BPF prog-id=159 op=UNLOAD Dec 18 11:05:03.922000 audit[4163]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdd4bb558 a3=0 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.922000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:03.922000 audit: BPF prog-id=160 op=LOAD Dec 18 11:05:03.922000 audit[4163]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdd4bb218 a2=74 a3=95 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.922000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:03.922000 audit: BPF prog-id=160 op=UNLOAD Dec 18 11:05:03.922000 audit[4163]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.922000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:03.922000 audit: BPF prog-id=161 op=LOAD Dec 18 11:05:03.922000 audit[4163]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdd4bb278 a2=94 a3=2 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.922000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:03.922000 audit: BPF prog-id=161 op=UNLOAD Dec 18 11:05:03.922000 audit[4163]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:03.922000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.007172 kubelet[2695]: E1218 11:05:04.007135 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:04.033000 audit: BPF prog-id=162 op=LOAD Dec 18 11:05:04.035293 kernel: kauditd_printk_skb: 86 callbacks suppressed Dec 18 11:05:04.035348 kernel: audit: type=1334 audit(1766055904.033:555): prog-id=162 op=LOAD Dec 18 11:05:04.033000 audit[4163]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdd4bb238 a2=40 a3=ffffdd4bb268 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.039450 kernel: audit: type=1300 audit(1766055904.033:555): arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdd4bb238 a2=40 a3=ffffdd4bb268 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.033000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.041675 kernel: audit: type=1327 audit(1766055904.033:555): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.033000 audit: BPF prog-id=162 op=UNLOAD Dec 18 11:05:04.041767 kernel: audit: type=1334 audit(1766055904.033:556): prog-id=162 op=UNLOAD Dec 18 11:05:04.033000 audit[4163]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffdd4bb268 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.042592 kernel: audit: type=1300 audit(1766055904.033:556): arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffdd4bb268 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.033000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.047106 kernel: audit: type=1327 audit(1766055904.033:556): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.046000 audit: BPF prog-id=163 op=LOAD Dec 18 11:05:04.048251 kernel: audit: type=1334 audit(1766055904.046:557): prog-id=163 op=LOAD Dec 18 11:05:04.046000 audit[4163]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdd4bb248 a2=94 a3=4 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.048361 kernel: audit: type=1300 audit(1766055904.046:557): arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdd4bb248 a2=94 a3=4 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.052742 kernel: audit: type=1327 audit(1766055904.046:557): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.046000 audit: BPF prog-id=163 op=UNLOAD Dec 18 11:05:04.052814 kernel: audit: type=1334 audit(1766055904.046:558): prog-id=163 op=UNLOAD Dec 18 11:05:04.046000 audit[4163]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.047000 audit: BPF prog-id=164 op=LOAD Dec 18 11:05:04.047000 audit[4163]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdd4bb088 a2=94 a3=5 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.047000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.052000 audit: BPF prog-id=164 op=UNLOAD Dec 18 11:05:04.052000 audit[4163]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.052000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.052000 audit: BPF prog-id=165 op=LOAD Dec 18 11:05:04.052000 audit[4163]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdd4bb2b8 a2=94 a3=6 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.052000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.052000 audit: BPF prog-id=165 op=UNLOAD Dec 18 11:05:04.052000 audit[4163]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.052000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.053000 audit: BPF prog-id=166 op=LOAD Dec 18 11:05:04.053000 audit[4163]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdd4baa88 a2=94 a3=83 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.053000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.053000 audit: BPF prog-id=167 op=LOAD Dec 18 11:05:04.053000 audit[4163]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffdd4ba848 a2=94 a3=2 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.053000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.053000 audit: BPF prog-id=167 op=UNLOAD Dec 18 11:05:04.053000 audit[4163]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.053000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.053000 audit: BPF prog-id=166 op=UNLOAD Dec 18 11:05:04.053000 audit[4163]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=372f2620 a3=372e5b00 items=0 ppid=4129 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.053000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 18 11:05:04.063000 audit: BPF prog-id=168 op=LOAD Dec 18 11:05:04.063000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff95779a8 a2=98 a3=fffff9577998 items=0 ppid=4129 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.063000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 18 11:05:04.063000 audit: BPF prog-id=168 op=UNLOAD Dec 18 11:05:04.063000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff9577978 a3=0 items=0 ppid=4129 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.063000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 18 11:05:04.063000 audit: BPF prog-id=169 op=LOAD Dec 18 11:05:04.063000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff9577858 a2=74 a3=95 items=0 ppid=4129 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.063000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 18 11:05:04.063000 audit: BPF prog-id=169 op=UNLOAD Dec 18 11:05:04.063000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4129 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.063000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 18 11:05:04.063000 audit: BPF prog-id=170 op=LOAD Dec 18 11:05:04.063000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff9577888 a2=40 a3=fffff95778b8 items=0 ppid=4129 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.063000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 18 11:05:04.063000 audit: BPF prog-id=170 op=UNLOAD Dec 18 11:05:04.063000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff95778b8 items=0 ppid=4129 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.063000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 18 11:05:04.131562 systemd-networkd[1350]: vxlan.calico: Link UP Dec 18 11:05:04.131568 systemd-networkd[1350]: vxlan.calico: Gained carrier Dec 18 11:05:04.146000 audit: BPF prog-id=171 op=LOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff76dd5c8 a2=98 a3=fffff76dd5b8 items=0 ppid=4129 pid=4211 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.146000 audit: BPF prog-id=171 op=UNLOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff76dd598 a3=0 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.146000 audit: BPF prog-id=172 op=LOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff76dd2a8 a2=74 a3=95 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.146000 audit: BPF prog-id=172 op=UNLOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.146000 audit: BPF prog-id=173 op=LOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff76dd308 a2=94 a3=2 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.146000 audit: BPF prog-id=173 op=UNLOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.146000 audit: BPF prog-id=174 op=LOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff76dd188 a2=40 a3=fffff76dd1b8 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.146000 audit: BPF prog-id=174 op=UNLOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff76dd1b8 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.146000 audit: BPF prog-id=175 op=LOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff76dd2d8 a2=94 a3=b7 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.146000 audit: BPF prog-id=175 op=UNLOAD Dec 18 11:05:04.146000 audit[4211]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.146000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.147000 audit: BPF prog-id=176 op=LOAD Dec 18 11:05:04.147000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff76dc988 a2=94 a3=2 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.147000 audit: BPF prog-id=176 op=UNLOAD Dec 18 11:05:04.147000 audit[4211]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.147000 audit: BPF prog-id=177 op=LOAD Dec 18 11:05:04.147000 audit[4211]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff76dcb18 a2=94 a3=30 items=0 ppid=4129 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 18 11:05:04.151000 audit: BPF prog-id=178 op=LOAD Dec 18 11:05:04.151000 audit[4215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd600bc78 a2=98 a3=ffffd600bc68 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.151000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.151000 audit: BPF prog-id=178 op=UNLOAD Dec 18 11:05:04.151000 audit[4215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd600bc48 a3=0 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.151000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.151000 audit: BPF prog-id=179 op=LOAD Dec 18 11:05:04.151000 audit[4215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd600b908 a2=74 a3=95 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.151000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.151000 audit: BPF prog-id=179 op=UNLOAD Dec 18 11:05:04.151000 audit[4215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.151000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.151000 audit: BPF prog-id=180 op=LOAD Dec 18 11:05:04.151000 audit[4215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd600b968 a2=94 a3=2 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.151000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.151000 audit: BPF prog-id=180 op=UNLOAD Dec 18 11:05:04.151000 audit[4215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.151000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.250000 audit: BPF prog-id=181 op=LOAD Dec 18 11:05:04.250000 audit[4215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd600b928 a2=40 a3=ffffd600b958 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.250000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.250000 audit: BPF prog-id=181 op=UNLOAD Dec 18 11:05:04.250000 audit[4215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd600b958 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.250000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.260000 audit: BPF prog-id=182 op=LOAD Dec 18 11:05:04.260000 audit[4215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd600b938 a2=94 a3=4 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.260000 audit: BPF prog-id=182 op=UNLOAD Dec 18 11:05:04.260000 audit[4215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.260000 audit: BPF prog-id=183 op=LOAD Dec 18 11:05:04.260000 audit[4215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd600b778 a2=94 a3=5 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.260000 audit: BPF prog-id=183 op=UNLOAD Dec 18 11:05:04.260000 audit[4215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.260000 audit: BPF prog-id=184 op=LOAD Dec 18 11:05:04.260000 audit[4215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd600b9a8 a2=94 a3=6 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.260000 audit: BPF prog-id=184 op=UNLOAD Dec 18 11:05:04.260000 audit[4215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.261000 audit: BPF prog-id=185 op=LOAD Dec 18 11:05:04.261000 audit[4215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd600b178 a2=94 a3=83 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.261000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.261000 audit: BPF prog-id=186 op=LOAD Dec 18 11:05:04.261000 audit[4215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd600af38 a2=94 
a3=2 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.261000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.261000 audit: BPF prog-id=186 op=UNLOAD Dec 18 11:05:04.261000 audit[4215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.261000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.261000 audit: BPF prog-id=185 op=UNLOAD Dec 18 11:05:04.261000 audit[4215]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3dca1620 a3=3dc94b00 items=0 ppid=4129 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.261000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 18 11:05:04.278000 audit: BPF prog-id=177 op=UNLOAD Dec 18 11:05:04.278000 audit[4129]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4001048480 a2=0 a3=0 items=0 ppid=3896 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.278000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 18 11:05:04.323000 audit[4244]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:04.323000 audit[4244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffffc8f93d0 a2=0 a3=ffff87bc0fa8 items=0 ppid=4129 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.323000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:04.324000 audit[4245]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4245 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:04.324000 audit[4245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffffc893fe0 a2=0 a3=ffff8b71dfa8 items=0 ppid=4129 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.324000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:04.330000 audit[4243]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4243 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:04.330000 audit[4243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffcea32530 a2=0 a3=ffffa38abfa8 items=0 ppid=4129 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.330000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:04.334000 audit[4247]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:04.334000 audit[4247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffc7c5ad30 a2=0 a3=ffff8fabafa8 items=0 ppid=4129 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:04.334000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:05.630872 systemd-networkd[1350]: vxlan.calico: Gained IPv6LL Dec 18 11:05:05.857401 kubelet[2695]: E1218 11:05:05.857307 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:05.858118 containerd[1520]: time="2025-12-18T11:05:05.857671195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snrcr,Uid:762d304a-7fbc-4a89-8956-5d4f6addfbf8,Namespace:kube-system,Attempt:0,}" Dec 18 11:05:05.964077 systemd-networkd[1350]: cali078755b3f05: Link UP Dec 18 11:05:05.964365 systemd-networkd[1350]: cali078755b3f05: Gained carrier Dec 18 11:05:05.977555 containerd[1520]: 2025-12-18 11:05:05.900 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--snrcr-eth0 coredns-668d6bf9bc- kube-system 762d304a-7fbc-4a89-8956-5d4f6addfbf8 884 0 2025-12-18 11:04:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-snrcr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali078755b3f05 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-snrcr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--snrcr-" Dec 18 11:05:05.977555 containerd[1520]: 2025-12-18 11:05:05.901 [INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-snrcr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" Dec 
18 11:05:05.977555 containerd[1520]: 2025-12-18 11:05:05.925 [INFO][4274] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" HandleID="k8s-pod-network.f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Workload="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.926 [INFO][4274] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" HandleID="k8s-pod-network.f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Workload="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ae080), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-snrcr", "timestamp":"2025-12-18 11:05:05.925813457 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.926 [INFO][4274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.926 [INFO][4274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.926 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.935 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" host="localhost" Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.941 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.945 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.947 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.949 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:05.977757 containerd[1520]: 2025-12-18 11:05:05.949 [INFO][4274] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" host="localhost" Dec 18 11:05:05.977950 containerd[1520]: 2025-12-18 11:05:05.950 [INFO][4274] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a Dec 18 11:05:05.977950 containerd[1520]: 2025-12-18 11:05:05.954 [INFO][4274] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" host="localhost" Dec 18 11:05:05.977950 containerd[1520]: 2025-12-18 11:05:05.959 [INFO][4274] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" host="localhost" Dec 18 11:05:05.977950 containerd[1520]: 
2025-12-18 11:05:05.959 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" host="localhost" Dec 18 11:05:05.977950 containerd[1520]: 2025-12-18 11:05:05.959 [INFO][4274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 18 11:05:05.977950 containerd[1520]: 2025-12-18 11:05:05.959 [INFO][4274] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" HandleID="k8s-pod-network.f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Workload="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" Dec 18 11:05:05.978050 containerd[1520]: 2025-12-18 11:05:05.962 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-snrcr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--snrcr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"762d304a-7fbc-4a89-8956-5d4f6addfbf8", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-snrcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali078755b3f05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:05.978106 containerd[1520]: 2025-12-18 11:05:05.962 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-snrcr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" Dec 18 11:05:05.978106 containerd[1520]: 2025-12-18 11:05:05.962 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali078755b3f05 ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-snrcr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" Dec 18 
11:05:05.978106 containerd[1520]: 2025-12-18 11:05:05.964 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-snrcr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" Dec 18 11:05:05.978179 containerd[1520]: 2025-12-18 11:05:05.964 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-snrcr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--snrcr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"762d304a-7fbc-4a89-8956-5d4f6addfbf8", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a", Pod:"coredns-668d6bf9bc-snrcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali078755b3f05", MAC:"32:ad:98:89:74:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:05.978179 containerd[1520]: 2025-12-18 11:05:05.973 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-snrcr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--snrcr-eth0" Dec 18 11:05:05.989000 audit[4291]: NETFILTER_CFG table=filter:127 family=2 entries=42 op=nft_register_chain pid=4291 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:05.989000 audit[4291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffe95838e0 a2=0 a3=ffff8c8d0fa8 items=0 ppid=4129 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:05.989000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:05.999039 containerd[1520]: time="2025-12-18T11:05:05.998600081Z" level=info msg="connecting to shim f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a" address="unix:///run/containerd/s/4528a7e4b9498df679b435c664894639ed4f6a3624f572f412062bd0471be8c0" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:05:06.023899 systemd[1]: Started cri-containerd-f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a.scope - libcontainer container f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a. Dec 18 11:05:06.034000 audit: BPF prog-id=187 op=LOAD Dec 18 11:05:06.034000 audit: BPF prog-id=188 op=LOAD Dec 18 11:05:06.034000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4300 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613138623234646230646239616538386637613130666563323638 Dec 18 11:05:06.034000 audit: BPF prog-id=188 op=UNLOAD Dec 18 11:05:06.034000 audit[4312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4300 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613138623234646230646239616538386637613130666563323638 Dec 18 11:05:06.034000 audit: BPF prog-id=189 op=LOAD Dec 18 11:05:06.034000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4300 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613138623234646230646239616538386637613130666563323638 Dec 18 11:05:06.034000 audit: BPF prog-id=190 op=LOAD Dec 18 11:05:06.034000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4300 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613138623234646230646239616538386637613130666563323638 Dec 18 11:05:06.034000 audit: BPF prog-id=190 op=UNLOAD Dec 18 
11:05:06.034000 audit[4312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4300 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613138623234646230646239616538386637613130666563323638 Dec 18 11:05:06.034000 audit: BPF prog-id=189 op=UNLOAD Dec 18 11:05:06.034000 audit[4312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4300 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613138623234646230646239616538386637613130666563323638 Dec 18 11:05:06.034000 audit: BPF prog-id=191 op=LOAD Dec 18 11:05:06.034000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4300 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613138623234646230646239616538386637613130666563323638 Dec 18 11:05:06.036232 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 18 11:05:06.055733 containerd[1520]: time="2025-12-18T11:05:06.055686899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snrcr,Uid:762d304a-7fbc-4a89-8956-5d4f6addfbf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a\"" Dec 18 11:05:06.057521 kubelet[2695]: E1218 11:05:06.057497 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:06.059598 containerd[1520]: time="2025-12-18T11:05:06.059563700Z" level=info msg="CreateContainer within sandbox \"f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 18 11:05:06.074305 containerd[1520]: time="2025-12-18T11:05:06.073660784Z" level=info msg="Container cba57428f54570b97c6ea0b2c4b94e52a3a5e2ac3384ab31d97b5f8997ef70b9: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:05:06.079016 containerd[1520]: time="2025-12-18T11:05:06.078972986Z" level=info msg="CreateContainer within sandbox \"f9a18b24db0db9ae88f7a10fec268cbe671a65ddc51981b8c3c13d60b8deec0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cba57428f54570b97c6ea0b2c4b94e52a3a5e2ac3384ab31d97b5f8997ef70b9\"" Dec 18 11:05:06.080569 containerd[1520]: 
time="2025-12-18T11:05:06.080544226Z" level=info msg="StartContainer for \"cba57428f54570b97c6ea0b2c4b94e52a3a5e2ac3384ab31d97b5f8997ef70b9\"" Dec 18 11:05:06.081879 containerd[1520]: time="2025-12-18T11:05:06.081730147Z" level=info msg="connecting to shim cba57428f54570b97c6ea0b2c4b94e52a3a5e2ac3384ab31d97b5f8997ef70b9" address="unix:///run/containerd/s/4528a7e4b9498df679b435c664894639ed4f6a3624f572f412062bd0471be8c0" protocol=ttrpc version=3 Dec 18 11:05:06.099985 systemd[1]: Started cri-containerd-cba57428f54570b97c6ea0b2c4b94e52a3a5e2ac3384ab31d97b5f8997ef70b9.scope - libcontainer container cba57428f54570b97c6ea0b2c4b94e52a3a5e2ac3384ab31d97b5f8997ef70b9. Dec 18 11:05:06.111000 audit: BPF prog-id=192 op=LOAD Dec 18 11:05:06.111000 audit: BPF prog-id=193 op=LOAD Dec 18 11:05:06.111000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4300 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613537343238663534353730623937633665613062326334623934 Dec 18 11:05:06.111000 audit: BPF prog-id=193 op=UNLOAD Dec 18 11:05:06.111000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4300 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613537343238663534353730623937633665613062326334623934 Dec 18 11:05:06.111000 audit: BPF prog-id=194 op=LOAD Dec 18 11:05:06.111000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4300 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613537343238663534353730623937633665613062326334623934 Dec 18 11:05:06.111000 audit: BPF prog-id=195 op=LOAD Dec 18 11:05:06.111000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4300 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613537343238663534353730623937633665613062326334623934 Dec 18 11:05:06.111000 audit: BPF prog-id=195 op=UNLOAD Dec 18 11:05:06.111000 audit[4338]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4300 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613537343238663534353730623937633665613062326334623934 Dec 18 11:05:06.111000 audit: BPF prog-id=194 op=UNLOAD Dec 18 11:05:06.111000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4300 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613537343238663534353730623937633665613062326334623934 Dec 18 11:05:06.111000 audit: BPF prog-id=196 op=LOAD Dec 18 11:05:06.111000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4300 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:06.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613537343238663534353730623937633665613062326334623934 Dec 18 11:05:06.138154 containerd[1520]: time="2025-12-18T11:05:06.138085444Z" level=info msg="StartContainer for \"cba57428f54570b97c6ea0b2c4b94e52a3a5e2ac3384ab31d97b5f8997ef70b9\" returns successfully" Dec 18 11:05:06.855858 kubelet[2695]: E1218 11:05:06.855814 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:06.856266 containerd[1520]: time="2025-12-18T11:05:06.856220025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l4c7n,Uid:52e2dbeb-b419-4df5-843b-cc3d0bbd23f2,Namespace:kube-system,Attempt:0,}" Dec 18 11:05:06.856588 containerd[1520]: time="2025-12-18T11:05:06.856560905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-779d74d86-dfx2b,Uid:91434238-f80e-42f1-bb48-55395c65ff33,Namespace:calico-system,Attempt:0,}" Dec 18 11:05:06.981404 systemd-networkd[1350]: cali8021d7692ec: Link UP Dec 18 11:05:06.982427 systemd-networkd[1350]: cali8021d7692ec: Gained carrier Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.907 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0 calico-kube-controllers-779d74d86- calico-system 91434238-f80e-42f1-bb48-55395c65ff33 872 0 2025-12-18 11:04:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:779d74d86 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-779d74d86-dfx2b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8021d7692ec [] [] }} ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Namespace="calico-system" Pod="calico-kube-controllers-779d74d86-dfx2b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.907 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Namespace="calico-system" Pod="calico-kube-controllers-779d74d86-dfx2b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.933 [INFO][4404] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" HandleID="k8s-pod-network.ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Workload="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.933 [INFO][4404] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" HandleID="k8s-pod-network.ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Workload="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034bd80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-779d74d86-dfx2b", "timestamp":"2025-12-18 11:05:06.933012689 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.933 [INFO][4404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.933 [INFO][4404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.933 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.948 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" host="localhost" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.954 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.959 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.961 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.963 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.963 [INFO][4404] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" host="localhost" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.965 [INFO][4404] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64 Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.970 [INFO][4404] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" host="localhost" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.975 [INFO][4404] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" host="localhost" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.975 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" host="localhost" Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.975 [INFO][4404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 18 11:05:06.997395 containerd[1520]: 2025-12-18 11:05:06.975 [INFO][4404] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" HandleID="k8s-pod-network.ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Workload="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" Dec 18 11:05:06.998339 containerd[1520]: 2025-12-18 11:05:06.978 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Namespace="calico-system" Pod="calico-kube-controllers-779d74d86-dfx2b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0", GenerateName:"calico-kube-controllers-779d74d86-", Namespace:"calico-system", SelfLink:"", UID:"91434238-f80e-42f1-bb48-55395c65ff33", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"779d74d86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-779d74d86-dfx2b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8021d7692ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:06.998339 containerd[1520]: 2025-12-18 11:05:06.979 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Namespace="calico-system" Pod="calico-kube-controllers-779d74d86-dfx2b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" Dec 18 11:05:06.998339 containerd[1520]: 2025-12-18 11:05:06.979 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8021d7692ec ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Namespace="calico-system" Pod="calico-kube-controllers-779d74d86-dfx2b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" Dec 18 11:05:06.998339 containerd[1520]: 2025-12-18 11:05:06.982 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Namespace="calico-system" Pod="calico-kube-controllers-779d74d86-dfx2b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" Dec 18 11:05:06.998339 containerd[1520]: 2025-12-18 11:05:06.983 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Namespace="calico-system" Pod="calico-kube-controllers-779d74d86-dfx2b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0", GenerateName:"calico-kube-controllers-779d74d86-", Namespace:"calico-system", SelfLink:"", UID:"91434238-f80e-42f1-bb48-55395c65ff33", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"779d74d86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64", Pod:"calico-kube-controllers-779d74d86-dfx2b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8021d7692ec", MAC:"d2:63:97:f7:bc:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:06.998339 containerd[1520]: 2025-12-18 11:05:06.994 [INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" Namespace="calico-system" Pod="calico-kube-controllers-779d74d86-dfx2b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--779d74d86--dfx2b-eth0" Dec 18 11:05:07.007000 audit[4426]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4426 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:07.007000 audit[4426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffdf9dc5b0 a2=0 a3=ffffb1811fa8 items=0 ppid=4129 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.007000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:07.016704 kubelet[2695]: E1218 11:05:07.016076 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:07.027895 containerd[1520]: time="2025-12-18T11:05:07.027856117Z" level=info msg="connecting to shim ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64" address="unix:///run/containerd/s/146043c22f264a32c0b9c292a83d7497ecb6d335053a5438820e671d860c19e2" namespace=k8s.io protocol=ttrpc version=3 Dec 18 
11:05:07.035113 kubelet[2695]: I1218 11:05:07.035049 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-snrcr" podStartSLOduration=37.035021039 podStartE2EDuration="37.035021039s" podCreationTimestamp="2025-12-18 11:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-18 11:05:07.033014759 +0000 UTC m=+43.265536617" watchObservedRunningTime="2025-12-18 11:05:07.035021039 +0000 UTC m=+43.267542897" Dec 18 11:05:07.105979 systemd[1]: Started cri-containerd-ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64.scope - libcontainer container ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64. Dec 18 11:05:07.109000 audit[4460]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:07.109000 audit[4460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc7a460c0 a2=0 a3=1 items=0 ppid=2850 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:07.115000 audit[4460]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:07.115000 audit[4460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc7a460c0 a2=0 a3=1 items=0 ppid=2850 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:07.124535 systemd-networkd[1350]: calie31ebde539c: Link UP Dec 18 11:05:07.125433 systemd-networkd[1350]: calie31ebde539c: Gained carrier Dec 18 11:05:07.127000 audit: BPF prog-id=197 op=LOAD Dec 18 11:05:07.127000 audit: BPF prog-id=198 op=LOAD Dec 18 11:05:07.127000 audit[4446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4436 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326236336638303236323464623661323932663861373462666562 Dec 18 11:05:07.127000 audit: BPF prog-id=198 op=UNLOAD Dec 18 11:05:07.127000 audit[4446]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4436 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.127000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326236336638303236323464623661323932663861373462666562 Dec 18 11:05:07.128000 audit: BPF prog-id=199 op=LOAD Dec 18 11:05:07.128000 audit[4446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4436 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326236336638303236323464623661323932663861373462666562 Dec 18 11:05:07.128000 audit: BPF prog-id=200 op=LOAD Dec 18 11:05:07.128000 audit[4446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4436 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326236336638303236323464623661323932663861373462666562 Dec 18 11:05:07.128000 audit: BPF prog-id=200 op=UNLOAD Dec 18 11:05:07.128000 audit[4446]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4436 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326236336638303236323464623661323932663861373462666562 Dec 18 11:05:07.128000 audit: BPF prog-id=199 op=UNLOAD Dec 18 11:05:07.128000 audit[4446]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4436 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326236336638303236323464623661323932663861373462666562 Dec 18 11:05:07.128000 audit: BPF prog-id=201 op=LOAD Dec 18 11:05:07.128000 audit[4446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4436 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.128000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666326236336638303236323464623661323932663861373462666562 Dec 18 11:05:07.133367 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 18 11:05:07.145000 audit[4471]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:07.145000 audit[4471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcf881bc0 a2=0 a3=1 items=0 ppid=2850 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:06.905 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0 coredns-668d6bf9bc- kube-system 52e2dbeb-b419-4df5-843b-cc3d0bbd23f2 882 0 2025-12-18 11:04:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-l4c7n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie31ebde539c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Namespace="kube-system" Pod="coredns-668d6bf9bc-l4c7n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l4c7n-" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:06.906 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Namespace="kube-system" Pod="coredns-668d6bf9bc-l4c7n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:06.938 [INFO][4402] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" HandleID="k8s-pod-network.39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Workload="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:06.938 [INFO][4402] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" HandleID="k8s-pod-network.39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Workload="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a8760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-l4c7n", "timestamp":"2025-12-18 11:05:06.93830937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 18 11:05:07.148014 containerd[1520]: 
2025-12-18 11:05:06.938 [INFO][4402] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:06.975 [INFO][4402] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:06.975 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.057 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" host="localhost" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.074 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.085 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.087 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.092 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.092 [INFO][4402] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" host="localhost" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.095 [INFO][4402] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.101 [INFO][4402] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" host="localhost" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.115 [INFO][4402] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" host="localhost" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.115 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" host="localhost" Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.115 [INFO][4402] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 18 11:05:07.148014 containerd[1520]: 2025-12-18 11:05:07.115 [INFO][4402] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" HandleID="k8s-pod-network.39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Workload="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" Dec 18 11:05:07.148607 containerd[1520]: 2025-12-18 11:05:07.121 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Namespace="kube-system" Pod="coredns-668d6bf9bc-l4c7n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"52e2dbeb-b419-4df5-843b-cc3d0bbd23f2", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-l4c7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie31ebde539c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:07.148607 containerd[1520]: 2025-12-18 11:05:07.121 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Namespace="kube-system" Pod="coredns-668d6bf9bc-l4c7n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" Dec 18 11:05:07.148607 containerd[1520]: 2025-12-18 11:05:07.121 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie31ebde539c ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Namespace="kube-system" Pod="coredns-668d6bf9bc-l4c7n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" Dec 18 11:05:07.148607 containerd[1520]: 2025-12-18 11:05:07.126 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Namespace="kube-system" Pod="coredns-668d6bf9bc-l4c7n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" Dec 18 11:05:07.148607 
containerd[1520]: 2025-12-18 11:05:07.126 [INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Namespace="kube-system" Pod="coredns-668d6bf9bc-l4c7n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"52e2dbeb-b419-4df5-843b-cc3d0bbd23f2", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f", Pod:"coredns-668d6bf9bc-l4c7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie31ebde539c", MAC:"82:0b:f1:eb:83:1b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:07.148607 containerd[1520]: 2025-12-18 11:05:07.140 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" Namespace="kube-system" Pod="coredns-668d6bf9bc-l4c7n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l4c7n-eth0" Dec 18 11:05:07.150000 audit[4471]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:07.150000 audit[4471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffcf881bc0 a2=0 a3=1 items=0 ppid=2850 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.150000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:07.161000 audit[4489]: NETFILTER_CFG table=filter:133 family=2 entries=40 op=nft_register_chain pid=4489 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:07.161000 audit[4489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20344 a0=3 a1=ffffe3139bc0 a2=0 a3=ffffa52a8fa8 items=0 ppid=4129 pid=4489 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.161000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:07.169108 containerd[1520]: time="2025-12-18T11:05:07.169019758Z" level=info msg="connecting to shim 39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f" address="unix:///run/containerd/s/40670d4e08f4347dac9869bad01c205d9788fcb9183e092d50b217f467c7a00c" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:05:07.170083 containerd[1520]: time="2025-12-18T11:05:07.170057238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-779d74d86-dfx2b,Uid:91434238-f80e-42f1-bb48-55395c65ff33,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff2b63f802624db6a292f8a74bfebee0e1f619f4da9a1584cdfa0bb174250e64\"" Dec 18 11:05:07.173194 containerd[1520]: time="2025-12-18T11:05:07.173163919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 18 11:05:07.195938 systemd[1]: Started cri-containerd-39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f.scope - libcontainer container 39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f. Dec 18 11:05:07.205000 audit: BPF prog-id=202 op=LOAD Dec 18 11:05:07.206000 audit: BPF prog-id=203 op=LOAD Dec 18 11:05:07.206000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4498 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333832656461303938353463306336613331626363393734666666 Dec 18 11:05:07.206000 audit: BPF prog-id=203 op=UNLOAD Dec 18 11:05:07.206000 audit[4509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333832656461303938353463306336613331626363393734666666 Dec 18 11:05:07.206000 audit: BPF prog-id=204 op=LOAD Dec 18 11:05:07.206000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4498 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333832656461303938353463306336613331626363393734666666 Dec 18 
11:05:07.206000 audit: BPF prog-id=205 op=LOAD Dec 18 11:05:07.206000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4498 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333832656461303938353463306336613331626363393734666666 Dec 18 11:05:07.206000 audit: BPF prog-id=205 op=UNLOAD Dec 18 11:05:07.206000 audit[4509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333832656461303938353463306336613331626363393734666666 Dec 18 11:05:07.206000 audit: BPF prog-id=204 op=UNLOAD Dec 18 11:05:07.206000 audit[4509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333832656461303938353463306336613331626363393734666666 Dec 18 11:05:07.206000 audit: BPF prog-id=206 op=LOAD Dec 18 11:05:07.206000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4498 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333832656461303938353463306336613331626363393734666666 Dec 18 11:05:07.207810 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 18 11:05:07.231327 containerd[1520]: time="2025-12-18T11:05:07.231284976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l4c7n,Uid:52e2dbeb-b419-4df5-843b-cc3d0bbd23f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f\"" Dec 18 11:05:07.232456 kubelet[2695]: E1218 11:05:07.232420 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:07.235942 containerd[1520]: time="2025-12-18T11:05:07.235592217Z" level=info msg="CreateContainer within sandbox 
\"39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 18 11:05:07.245102 containerd[1520]: time="2025-12-18T11:05:07.245051420Z" level=info msg="Container 8e68b6a92fc90c1a5fbf0bc66cb7a197a961d7cc8ecc0f41c0a348fbe1c13dd4: CDI devices from CRI Config.CDIDevices: []" Dec 18 11:05:07.252404 containerd[1520]: time="2025-12-18T11:05:07.252369342Z" level=info msg="CreateContainer within sandbox \"39382eda09854c0c6a31bcc974fff5043cc63ccbfa389bed6b24dd9217a7102f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e68b6a92fc90c1a5fbf0bc66cb7a197a961d7cc8ecc0f41c0a348fbe1c13dd4\"" Dec 18 11:05:07.252900 containerd[1520]: time="2025-12-18T11:05:07.252855262Z" level=info msg="StartContainer for \"8e68b6a92fc90c1a5fbf0bc66cb7a197a961d7cc8ecc0f41c0a348fbe1c13dd4\"" Dec 18 11:05:07.253808 containerd[1520]: time="2025-12-18T11:05:07.253772823Z" level=info msg="connecting to shim 8e68b6a92fc90c1a5fbf0bc66cb7a197a961d7cc8ecc0f41c0a348fbe1c13dd4" address="unix:///run/containerd/s/40670d4e08f4347dac9869bad01c205d9788fcb9183e092d50b217f467c7a00c" protocol=ttrpc version=3 Dec 18 11:05:07.283948 systemd[1]: Started cri-containerd-8e68b6a92fc90c1a5fbf0bc66cb7a197a961d7cc8ecc0f41c0a348fbe1c13dd4.scope - libcontainer container 8e68b6a92fc90c1a5fbf0bc66cb7a197a961d7cc8ecc0f41c0a348fbe1c13dd4. Dec 18 11:05:07.294822 systemd-networkd[1350]: cali078755b3f05: Gained IPv6LL Dec 18 11:05:07.295000 audit: BPF prog-id=207 op=LOAD Dec 18 11:05:07.296000 audit: BPF prog-id=208 op=LOAD Dec 18 11:05:07.296000 audit[4534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4498 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363862366139326663393063316135666266306263363663623761 Dec 18 11:05:07.296000 audit: BPF prog-id=208 op=UNLOAD Dec 18 11:05:07.296000 audit[4534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363862366139326663393063316135666266306263363663623761 Dec 18 11:05:07.296000 audit: BPF prog-id=209 op=LOAD Dec 18 11:05:07.296000 audit[4534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4498 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.296000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363862366139326663393063316135666266306263363663623761 Dec 18 11:05:07.296000 audit: BPF prog-id=210 op=LOAD Dec 18 11:05:07.296000 audit[4534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4498 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363862366139326663393063316135666266306263363663623761 Dec 18 11:05:07.296000 audit: BPF prog-id=210 op=UNLOAD Dec 18 11:05:07.296000 audit[4534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363862366139326663393063316135666266306263363663623761 Dec 18 11:05:07.296000 audit: BPF prog-id=209 op=UNLOAD Dec 18 11:05:07.296000 audit[4534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4498 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363862366139326663393063316135666266306263363663623761 Dec 18 11:05:07.296000 audit: BPF prog-id=211 op=LOAD Dec 18 11:05:07.296000 audit[4534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4498 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363862366139326663393063316135666266306263363663623761 Dec 18 11:05:07.312097 containerd[1520]: time="2025-12-18T11:05:07.312053599Z" level=info msg="StartContainer for \"8e68b6a92fc90c1a5fbf0bc66cb7a197a961d7cc8ecc0f41c0a348fbe1c13dd4\" returns successfully" Dec 18 11:05:07.378959 containerd[1520]: time="2025-12-18T11:05:07.378819859Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:07.380862 containerd[1520]: time="2025-12-18T11:05:07.380797939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 18 
11:05:07.380922 containerd[1520]: time="2025-12-18T11:05:07.380799099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 18 11:05:07.381125 kubelet[2695]: E1218 11:05:07.381059 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 18 11:05:07.381172 kubelet[2695]: E1218 11:05:07.381129 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 18 11:05:07.381579 kubelet[2695]: E1218 11:05:07.381355 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2r7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-779d74d86-dfx2b_calico-system(91434238-f80e-42f1-bb48-55395c65ff33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:07.382831 kubelet[2695]: E1218 11:05:07.382758 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" podUID="91434238-f80e-42f1-bb48-55395c65ff33" Dec 18 11:05:07.661285 systemd[1]: Started sshd@8-5-10.0.0.27:22-10.0.0.1:34968.service - OpenSSH per-connection server daemon (10.0.0.1:34968). Dec 18 11:05:07.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-5-10.0.0.27:22-10.0.0.1:34968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:07.723000 audit[4569]: AUDIT1101 pid=4569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:07.724617 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 34968 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:07.724000 audit[4569]: AUDIT1103 pid=4569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:07.724000 audit[4569]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee662410 a2=3 a3=0 items=0 ppid=1 pid=4569 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.724000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:07.726314 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:07.730083 systemd-logind[1499]: New session '10' of user 'core' with class 'user' and type 'tty'. 
Dec 18 11:05:07.743921 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 18 11:05:07.745000 audit[4569]: AUDIT1105 pid=4569 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:07.747000 audit[4573]: AUDIT1103 pid=4573 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:07.840792 sshd[4573]: Connection closed by 10.0.0.1 port 34968 Dec 18 11:05:07.841128 sshd-session[4569]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:07.840000 audit[4569]: AUDIT1106 pid=4569 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:07.841000 audit[4569]: AUDIT1104 pid=4569 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:07.844829 systemd[1]: sshd@8-5-10.0.0.27:22-10.0.0.1:34968.service: Deactivated successfully. Dec 18 11:05:07.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-5-10.0.0.27:22-10.0.0.1:34968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:07.846672 systemd[1]: session-10.scope: Deactivated successfully. Dec 18 11:05:07.848085 systemd-logind[1499]: Session 10 logged out. Waiting for processes to exit. Dec 18 11:05:07.848982 systemd-logind[1499]: Removed session 10. 
Dec 18 11:05:07.856639 containerd[1520]: time="2025-12-18T11:05:07.856593597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fb4cdc46-f2s62,Uid:68f46b97-44ea-43d3-8c4b-06c74ab4d137,Namespace:calico-apiserver,Attempt:0,}" Dec 18 11:05:07.856824 containerd[1520]: time="2025-12-18T11:05:07.856605997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdf8q,Uid:de13faa1-4005-4e4c-bebe-9b34acc642ce,Namespace:calico-system,Attempt:0,}" Dec 18 11:05:07.967368 systemd-networkd[1350]: calic0a615a47ed: Link UP Dec 18 11:05:07.967825 systemd-networkd[1350]: calic0a615a47ed: Gained carrier Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.901 [INFO][4588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0 calico-apiserver-54fb4cdc46- calico-apiserver 68f46b97-44ea-43d3-8c4b-06c74ab4d137 880 0 2025-12-18 11:04:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54fb4cdc46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54fb4cdc46-f2s62 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic0a615a47ed [] [] }} ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Namespace="calico-apiserver" Pod="calico-apiserver-54fb4cdc46-f2s62" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.901 [INFO][4588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Namespace="calico-apiserver" Pod="calico-apiserver-54fb4cdc46-f2s62" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.927 [INFO][4617] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" HandleID="k8s-pod-network.29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Workload="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.928 [INFO][4617] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" HandleID="k8s-pod-network.29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Workload="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dbd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54fb4cdc46-f2s62", "timestamp":"2025-12-18 11:05:07.927895737 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.928 [INFO][4617] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.928 [INFO][4617] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.928 [INFO][4617] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.936 [INFO][4617] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" host="localhost" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.941 [INFO][4617] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.946 [INFO][4617] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.948 [INFO][4617] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.950 [INFO][4617] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.950 [INFO][4617] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" host="localhost" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.952 [INFO][4617] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.956 [INFO][4617] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" host="localhost" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.961 [INFO][4617] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" host="localhost" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.961 [INFO][4617] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" host="localhost" Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.961 [INFO][4617] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 18 11:05:07.982903 containerd[1520]: 2025-12-18 11:05:07.961 [INFO][4617] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" HandleID="k8s-pod-network.29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Workload="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" Dec 18 11:05:07.983425 containerd[1520]: 2025-12-18 11:05:07.965 [INFO][4588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Namespace="calico-apiserver" Pod="calico-apiserver-54fb4cdc46-f2s62" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0", GenerateName:"calico-apiserver-54fb4cdc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"68f46b97-44ea-43d3-8c4b-06c74ab4d137", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54fb4cdc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54fb4cdc46-f2s62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic0a615a47ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:07.983425 containerd[1520]: 2025-12-18 11:05:07.965 [INFO][4588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Namespace="calico-apiserver" Pod="calico-apiserver-54fb4cdc46-f2s62" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" Dec 18 11:05:07.983425 containerd[1520]: 2025-12-18 11:05:07.965 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0a615a47ed ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Namespace="calico-apiserver" Pod="calico-apiserver-54fb4cdc46-f2s62" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" Dec 18 11:05:07.983425 containerd[1520]: 2025-12-18 11:05:07.967 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Namespace="calico-apiserver" Pod="calico-apiserver-54fb4cdc46-f2s62" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" Dec 18 11:05:07.983425 containerd[1520]: 2025-12-18 11:05:07.967 [INFO][4588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Namespace="calico-apiserver" Pod="calico-apiserver-54fb4cdc46-f2s62" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0", GenerateName:"calico-apiserver-54fb4cdc46-", Namespace:"calico-apiserver", SelfLink:"", UID:"68f46b97-44ea-43d3-8c4b-06c74ab4d137", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54fb4cdc46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe", Pod:"calico-apiserver-54fb4cdc46-f2s62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic0a615a47ed", MAC:"02:d6:79:d8:f0:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:07.983425 containerd[1520]: 2025-12-18 11:05:07.980 [INFO][4588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" Namespace="calico-apiserver" Pod="calico-apiserver-54fb4cdc46-f2s62" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fb4cdc46--f2s62-eth0" Dec 18 11:05:07.991000 audit[4641]: NETFILTER_CFG table=filter:134 family=2 entries=62 op=nft_register_chain pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:07.991000 audit[4641]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31772 a0=3 a1=ffffcaf069e0 a2=0 a3=ffff7fda9fa8 items=0 ppid=4129 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:07.991000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:08.001339 containerd[1520]: time="2025-12-18T11:05:08.001301078Z" level=info msg="connecting to shim 29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe" address="unix:///run/containerd/s/4bdf3a29e4693ee4be91a746c9963d2f3cc0e9d805476753340d092f042d4799" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:05:08.019269 kubelet[2695]: E1218 11:05:08.019230 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" podUID="91434238-f80e-42f1-bb48-55395c65ff33" Dec 18 11:05:08.028413 kubelet[2695]: E1218 11:05:08.028167 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:08.028413 kubelet[2695]: E1218 11:05:08.028311 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:08.028939 systemd[1]: Started cri-containerd-29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe.scope - libcontainer container 29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe. Dec 18 11:05:08.051772 kubelet[2695]: I1218 11:05:08.051287 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l4c7n" podStartSLOduration=38.051271132 podStartE2EDuration="38.051271132s" podCreationTimestamp="2025-12-18 11:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-18 11:05:08.050906612 +0000 UTC m=+44.283428470" watchObservedRunningTime="2025-12-18 11:05:08.051271132 +0000 UTC m=+44.283792990" Dec 18 11:05:08.054000 audit: BPF prog-id=212 op=LOAD Dec 18 11:05:08.055000 audit: BPF prog-id=213 op=LOAD Dec 18 11:05:08.055000 audit[4662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4651 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303935383330656666343332333365323661323961303662633634 Dec 18 11:05:08.055000 audit: BPF prog-id=213 op=UNLOAD Dec 18 11:05:08.055000 audit[4662]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4651 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303935383330656666343332333365323661323961303662633634 Dec 18 11:05:08.056000 audit: BPF prog-id=214 op=LOAD Dec 18 11:05:08.056000 audit[4662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4651 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.056000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303935383330656666343332333365323661323961303662633634 Dec 18 11:05:08.056000 audit: BPF prog-id=215 op=LOAD Dec 18 11:05:08.056000 audit[4662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4651 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303935383330656666343332333365323661323961303662633634 Dec 18 11:05:08.056000 audit: BPF prog-id=215 op=UNLOAD Dec 18 11:05:08.056000 audit[4662]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4651 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303935383330656666343332333365323661323961303662633634 Dec 18 11:05:08.056000 audit: BPF prog-id=214 op=UNLOAD Dec 18 11:05:08.056000 audit[4662]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4651 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303935383330656666343332333365323661323961303662633634 Dec 18 11:05:08.056000 audit: BPF prog-id=216 op=LOAD Dec 18 11:05:08.056000 audit[4662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4651 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303935383330656666343332333365323661323961303662633634 Dec 18 11:05:08.060266 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 18 11:05:08.065000 audit[4682]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:08.065000 audit[4682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffbd5ef90 a2=0 a3=1 items=0 ppid=2850 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.065000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:08.072000 audit[4682]: NETFILTER_CFG table=nat:136 family=2 entries=44 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:08.072000 audit[4682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffffbd5ef90 a2=0 a3=1 items=0 ppid=2850 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:08.079192 systemd-networkd[1350]: cali8641a78f756: Link UP Dec 18 11:05:08.079416 systemd-networkd[1350]: cali8641a78f756: Gained carrier Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:07.899 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tdf8q-eth0 csi-node-driver- calico-system de13faa1-4005-4e4c-bebe-9b34acc642ce 782 0 2025-12-18 11:04:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tdf8q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8641a78f756 [] [] }} ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Namespace="calico-system" Pod="csi-node-driver-tdf8q" WorkloadEndpoint="localhost-k8s-csi--node--driver--tdf8q-" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:07.899 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Namespace="calico-system" Pod="csi-node-driver-tdf8q" WorkloadEndpoint="localhost-k8s-csi--node--driver--tdf8q-eth0" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:07.930 [INFO][4619] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" HandleID="k8s-pod-network.2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Workload="localhost-k8s-csi--node--driver--tdf8q-eth0" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:07.930 [INFO][4619] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" HandleID="k8s-pod-network.2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Workload="localhost-k8s-csi--node--driver--tdf8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tdf8q", "timestamp":"2025-12-18 11:05:07.930149618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:07.930 [INFO][4619] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:07.961 [INFO][4619] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:07.961 [INFO][4619] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.038 [INFO][4619] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" host="localhost" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.045 [INFO][4619] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.049 [INFO][4619] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.053 [INFO][4619] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.057 [INFO][4619] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.057 [INFO][4619] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" host="localhost" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.060 [INFO][4619] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.064 [INFO][4619] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" host="localhost" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.072 [INFO][4619] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" host="localhost" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.073 [INFO][4619] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" host="localhost" Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.073 [INFO][4619] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 18 11:05:08.097641 containerd[1520]: 2025-12-18 11:05:08.073 [INFO][4619] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" HandleID="k8s-pod-network.2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Workload="localhost-k8s-csi--node--driver--tdf8q-eth0" Dec 18 11:05:08.098196 containerd[1520]: 2025-12-18 11:05:08.075 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Namespace="calico-system" Pod="csi-node-driver-tdf8q" WorkloadEndpoint="localhost-k8s-csi--node--driver--tdf8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tdf8q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de13faa1-4005-4e4c-bebe-9b34acc642ce", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tdf8q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8641a78f756", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:08.098196 containerd[1520]: 2025-12-18 11:05:08.076 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Namespace="calico-system" Pod="csi-node-driver-tdf8q" WorkloadEndpoint="localhost-k8s-csi--node--driver--tdf8q-eth0" Dec 18 11:05:08.098196 containerd[1520]: 2025-12-18 11:05:08.076 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8641a78f756 ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Namespace="calico-system" Pod="csi-node-driver-tdf8q" WorkloadEndpoint="localhost-k8s-csi--node--driver--tdf8q-eth0" Dec 18 11:05:08.098196 containerd[1520]: 2025-12-18 11:05:08.080 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Namespace="calico-system" Pod="csi-node-driver-tdf8q" WorkloadEndpoint="localhost-k8s-csi--node--driver--tdf8q-eth0" Dec 18 11:05:08.098196 containerd[1520]: 2025-12-18 11:05:08.081 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Namespace="calico-system" Pod="csi-node-driver-tdf8q" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--tdf8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tdf8q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de13faa1-4005-4e4c-bebe-9b34acc642ce", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a", Pod:"csi-node-driver-tdf8q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8641a78f756", MAC:"4e:36:5f:8a:5b:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:08.098196 containerd[1520]: 2025-12-18 11:05:08.092 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" Namespace="calico-system" Pod="csi-node-driver-tdf8q" WorkloadEndpoint="localhost-k8s-csi--node--driver--tdf8q-eth0" Dec 18 11:05:08.106578 containerd[1520]: time="2025-12-18T11:05:08.106546027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fb4cdc46-f2s62,Uid:68f46b97-44ea-43d3-8c4b-06c74ab4d137,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"29095830eff43233e26a29a06bc64d398e3030082755399925cfc3fba01551fe\"" Dec 18 11:05:08.108052 containerd[1520]: time="2025-12-18T11:05:08.108015227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 18 11:05:08.107000 audit[4697]: NETFILTER_CFG table=filter:137 family=2 entries=58 op=nft_register_chain pid=4697 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:08.107000 audit[4697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27180 a0=3 a1=ffffe3340e20 a2=0 a3=ffff961e7fa8 items=0 ppid=4129 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.107000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:08.125957 containerd[1520]: time="2025-12-18T11:05:08.125840152Z" level=info msg="connecting to shim 2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a" address="unix:///run/containerd/s/eae52db3060a55d5482f7ac78c8e22e09e32c8cb34e2e313d0b1f941e3819587" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:05:08.153913 systemd[1]: Started 
cri-containerd-2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a.scope - libcontainer container 2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a. Dec 18 11:05:08.162000 audit: BPF prog-id=217 op=LOAD Dec 18 11:05:08.163000 audit: BPF prog-id=218 op=LOAD Dec 18 11:05:08.163000 audit[4718]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4707 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383633323863636438353636386637323831333265643935363430 Dec 18 11:05:08.163000 audit: BPF prog-id=218 op=UNLOAD Dec 18 11:05:08.163000 audit[4718]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4707 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383633323863636438353636386637323831333265643935363430 Dec 18 11:05:08.163000 audit: BPF prog-id=219 op=LOAD Dec 18 11:05:08.163000 audit[4718]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4707 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383633323863636438353636386637323831333265643935363430 Dec 18 11:05:08.163000 audit: BPF prog-id=220 op=LOAD Dec 18 11:05:08.163000 audit[4718]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4707 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383633323863636438353636386637323831333265643935363430 Dec 18 11:05:08.163000 audit: BPF prog-id=220 op=UNLOAD Dec 18 11:05:08.163000 audit[4718]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4707 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.163000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383633323863636438353636386637323831333265643935363430 Dec 18 11:05:08.163000 audit: BPF prog-id=219 op=UNLOAD Dec 18 11:05:08.163000 audit[4718]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4707 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383633323863636438353636386637323831333265643935363430 Dec 18 11:05:08.163000 audit: BPF prog-id=221 op=LOAD Dec 18 11:05:08.163000 audit[4718]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4707 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:08.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383633323863636438353636386637323831333265643935363430 Dec 18 11:05:08.164800 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 18 11:05:08.176774 containerd[1520]: time="2025-12-18T11:05:08.176740526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdf8q,Uid:de13faa1-4005-4e4c-bebe-9b34acc642ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c86328ccd85668f728132ed9564040388fadaf4389bad03a232b551ca51263a\"" Dec 18 11:05:08.325115 containerd[1520]: time="2025-12-18T11:05:08.325057286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:08.325943 containerd[1520]: time="2025-12-18T11:05:08.325909646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 18 11:05:08.326011 containerd[1520]: time="2025-12-18T11:05:08.325966766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:08.326162 kubelet[2695]: E1218 11:05:08.326125 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:08.326242 kubelet[2695]: E1218 11:05:08.326177 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:08.326482 containerd[1520]: 
time="2025-12-18T11:05:08.326463366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 18 11:05:08.326558 kubelet[2695]: E1218 11:05:08.326446 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg7gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54fb4cdc46-f2s62_calico-apiserver(68f46b97-44ea-43d3-8c4b-06c74ab4d137): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:08.327828 kubelet[2695]: E1218 11:05:08.327795 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" podUID="68f46b97-44ea-43d3-8c4b-06c74ab4d137" Dec 18 11:05:08.446898 systemd-networkd[1350]: cali8021d7692ec: Gained IPv6LL Dec 18 11:05:08.590169 containerd[1520]: time="2025-12-18T11:05:08.590031038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:08.591187 containerd[1520]: time="2025-12-18T11:05:08.591149718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 18 11:05:08.591390 containerd[1520]: time="2025-12-18T11:05:08.591171678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:08.591421 kubelet[2695]: E1218 11:05:08.591372 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 18 11:05:08.591455 kubelet[2695]: E1218 11:05:08.591420 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 18 11:05:08.591585 kubelet[2695]: E1218 11:05:08.591542 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tdf8q_calico-system(de13faa1-4005-4e4c-bebe-9b34acc642ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:08.593669 containerd[1520]: time="2025-12-18T11:05:08.593555159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 18 11:05:08.810986 containerd[1520]: time="2025-12-18T11:05:08.810928337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 
11:05:08.811944 containerd[1520]: time="2025-12-18T11:05:08.811902818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 18 11:05:08.812042 containerd[1520]: time="2025-12-18T11:05:08.811984098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:08.812168 kubelet[2695]: E1218 11:05:08.812109 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 18 11:05:08.812216 kubelet[2695]: E1218 11:05:08.812167 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 18 11:05:08.812330 kubelet[2695]: E1218 11:05:08.812283 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tdf8q_calico-system(de13faa1-4005-4e4c-bebe-9b34acc642ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:08.813499 kubelet[2695]: E1218 11:05:08.813462 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:05:08.856714 containerd[1520]: time="2025-12-18T11:05:08.856600390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6j22f,Uid:400efbcf-c6cb-4177-bb40-b857f3dc9989,Namespace:calico-system,Attempt:0,}" Dec 18 11:05:08.856817 containerd[1520]: time="2025-12-18T11:05:08.856745590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68744c49b-dflk9,Uid:fc3f310c-1439-4243-ade3-cd849e5460ff,Namespace:calico-apiserver,Attempt:0,}" Dec 18 11:05:08.861807 containerd[1520]: time="2025-12-18T11:05:08.861771591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68744c49b-xhw45,Uid:bdeec125-ec5f-4e1e-9801-c884f349294d,Namespace:calico-apiserver,Attempt:0,}" Dec 18 11:05:08.892515 kubelet[2695]: I1218 11:05:08.892481 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 18 11:05:08.892893 kubelet[2695]: E1218 11:05:08.892872 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:09.027710 systemd-networkd[1350]: cali0e552f4c804: Link UP Dec 18 11:05:09.028524 systemd-networkd[1350]: cali0e552f4c804: Gained carrier Dec 18 11:05:09.039926 kubelet[2695]: E1218 11:05:09.039878 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:05:09.057626 kubelet[2695]: E1218 11:05:09.056911 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" podUID="91434238-f80e-42f1-bb48-55395c65ff33" Dec 18 11:05:09.057945 kubelet[2695]: E1218 11:05:09.057382 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:09.057945 kubelet[2695]: E1218 11:05:09.057473 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" podUID="68f46b97-44ea-43d3-8c4b-06c74ab4d137" Dec 18 11:05:09.057945 kubelet[2695]: E1218 11:05:09.057896 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:09.058080 kubelet[2695]: E1218 11:05:09.057988 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.911 [INFO][4744] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--6j22f-eth0 goldmane-666569f655- calico-system 400efbcf-c6cb-4177-bb40-b857f3dc9989 879 0 2025-12-18 11:04:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-6j22f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0e552f4c804 [] [] }} ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Namespace="calico-system" Pod="goldmane-666569f655-6j22f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--6j22f-" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.911 [INFO][4744] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Namespace="calico-system" Pod="goldmane-666569f655-6j22f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--6j22f-eth0" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.954 [INFO][4793] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" HandleID="k8s-pod-network.a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Workload="localhost-k8s-goldmane--666569f655--6j22f-eth0" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.954 [INFO][4793] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" HandleID="k8s-pod-network.a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Workload="localhost-k8s-goldmane--666569f655--6j22f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40003555f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-6j22f", "timestamp":"2025-12-18 11:05:08.954116856 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.954 [INFO][4793] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.954 [INFO][4793] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.954 [INFO][4793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.967 [INFO][4793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" host="localhost" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.976 [INFO][4793] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.988 [INFO][4793] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.990 [INFO][4793] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.992 [INFO][4793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.992 [INFO][4793] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" host="localhost" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:08.994 [INFO][4793] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43 Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:09.004 [INFO][4793] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" host="localhost" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:09.019 [INFO][4793] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" host="localhost" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:09.019 [INFO][4793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" host="localhost" Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:09.019 [INFO][4793] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 18 11:05:09.059562 containerd[1520]: 2025-12-18 11:05:09.020 [INFO][4793] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" HandleID="k8s-pod-network.a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Workload="localhost-k8s-goldmane--666569f655--6j22f-eth0" Dec 18 11:05:09.063159 containerd[1520]: 2025-12-18 11:05:09.023 [INFO][4744] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Namespace="calico-system" Pod="goldmane-666569f655-6j22f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--6j22f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--6j22f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"400efbcf-c6cb-4177-bb40-b857f3dc9989", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-6j22f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0e552f4c804", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:09.063159 containerd[1520]: 2025-12-18 11:05:09.025 [INFO][4744] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Namespace="calico-system" Pod="goldmane-666569f655-6j22f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--6j22f-eth0" Dec 18 11:05:09.063159 containerd[1520]: 2025-12-18 11:05:09.025 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e552f4c804 ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Namespace="calico-system" Pod="goldmane-666569f655-6j22f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--6j22f-eth0" Dec 18 11:05:09.063159 containerd[1520]: 2025-12-18 11:05:09.029 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Namespace="calico-system" Pod="goldmane-666569f655-6j22f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--6j22f-eth0" Dec 18 11:05:09.063159 containerd[1520]: 2025-12-18 11:05:09.031 [INFO][4744] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Namespace="calico-system" Pod="goldmane-666569f655-6j22f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--6j22f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--6j22f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"400efbcf-c6cb-4177-bb40-b857f3dc9989", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43", Pod:"goldmane-666569f655-6j22f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0e552f4c804", MAC:"b6:93:88:86:03:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:09.063159 containerd[1520]: 2025-12-18 11:05:09.049 [INFO][4744] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" Namespace="calico-system" Pod="goldmane-666569f655-6j22f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--6j22f-eth0" Dec 18 11:05:09.088425 systemd-networkd[1350]: calie31ebde539c: Gained IPv6LL Dec 18 11:05:09.106000 audit[4872]: NETFILTER_CFG table=filter:138 family=2 entries=66 op=nft_register_chain pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:09.109775 kernel: kauditd_printk_skb: 350 callbacks suppressed Dec 18 11:05:09.109870 kernel: audit: type=1325 audit(1766055909.106:685): table=filter:138 family=2 entries=66 op=nft_register_chain pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:09.106000 audit[4872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32768 a0=3 a1=ffffdf36e3a0 a2=0 a3=ffffb39b9fa8 items=0 ppid=4129 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.115767 kernel: audit: type=1300 audit(1766055909.106:685): arch=c00000b7 syscall=211 success=yes exit=32768 a0=3 a1=ffffdf36e3a0 a2=0 a3=ffffb39b9fa8 items=0 ppid=4129 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.106000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:09.120862 kernel: audit: type=1327 audit(1766055909.106:685): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:09.121011 
containerd[1520]: time="2025-12-18T11:05:09.120959299Z" level=info msg="connecting to shim a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43" address="unix:///run/containerd/s/61ce7ac7406e2c90ca2a33188956cfdab23692cee5dcd5efbca8d2b6022ecddc" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:05:09.142000 audit[4896]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:09.142000 audit[4896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc09cbf50 a2=0 a3=1 items=0 ppid=2850 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:09.151410 kernel: audit: type=1325 audit(1766055909.142:686): table=filter:139 family=2 entries=14 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:09.151439 kernel: audit: type=1300 audit(1766055909.142:686): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc09cbf50 a2=0 a3=1 items=0 ppid=2850 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.151466 kernel: audit: type=1327 audit(1766055909.142:686): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:09.149000 audit[4896]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:09.155647 kernel: audit: type=1325 audit(1766055909.149:687): table=nat:140 family=2 entries=20 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:09.149000 audit[4896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc09cbf50 a2=0 a3=1 items=0 ppid=2850 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.160541 kernel: audit: type=1300 audit(1766055909.149:687): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc09cbf50 a2=0 a3=1 items=0 ppid=2850 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:09.162165 systemd-networkd[1350]: calia749b8e67d4: Link UP Dec 18 11:05:09.163825 kernel: audit: type=1327 audit(1766055909.149:687): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:09.162386 systemd-networkd[1350]: calia749b8e67d4: Gained carrier Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:08.914 [INFO][4746] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0 calico-apiserver-68744c49b- calico-apiserver fc3f310c-1439-4243-ade3-cd849e5460ff 881 0 2025-12-18 11:04:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68744c49b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68744c49b-dflk9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia749b8e67d4 [] [] }} ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-dflk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--dflk9-" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:08.915 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-dflk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:08.962 [INFO][4799] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" HandleID="k8s-pod-network.2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Workload="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:08.962 [INFO][4799] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" HandleID="k8s-pod-network.2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Workload="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68744c49b-dflk9", "timestamp":"2025-12-18 11:05:08.962128298 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:08.962 [INFO][4799] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.019 [INFO][4799] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.019 [INFO][4799] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.068 [INFO][4799] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" host="localhost" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.077 [INFO][4799] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.095 [INFO][4799] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.103 [INFO][4799] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.107 [INFO][4799] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.109 [INFO][4799] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" host="localhost" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.113 [INFO][4799] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.126 [INFO][4799] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" host="localhost" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.139 [INFO][4799] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" host="localhost" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.140 [INFO][4799] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" host="localhost" Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.140 [INFO][4799] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 18 11:05:09.182650 containerd[1520]: 2025-12-18 11:05:09.141 [INFO][4799] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" HandleID="k8s-pod-network.2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Workload="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" Dec 18 11:05:09.183165 containerd[1520]: 2025-12-18 11:05:09.148 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-dflk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0", GenerateName:"calico-apiserver-68744c49b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc3f310c-1439-4243-ade3-cd849e5460ff", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68744c49b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68744c49b-dflk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia749b8e67d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:09.183165 containerd[1520]: 2025-12-18 11:05:09.148 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-dflk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" Dec 18 11:05:09.183165 containerd[1520]: 2025-12-18 11:05:09.148 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia749b8e67d4 ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-dflk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" Dec 18 11:05:09.183165 containerd[1520]: 2025-12-18 11:05:09.162 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-dflk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" Dec 18 11:05:09.183165 containerd[1520]: 2025-12-18 11:05:09.162 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-dflk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0", GenerateName:"calico-apiserver-68744c49b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc3f310c-1439-4243-ade3-cd849e5460ff", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68744c49b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c", Pod:"calico-apiserver-68744c49b-dflk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia749b8e67d4", MAC:"a2:e1:e4:54:20:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:09.183165 containerd[1520]: 2025-12-18 11:05:09.180 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-dflk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--dflk9-eth0" Dec 18 11:05:09.188001 systemd[1]: Started cri-containerd-a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43.scope - libcontainer container a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43. 
Dec 18 11:05:09.199000 audit[4925]: NETFILTER_CFG table=filter:141 family=2 entries=59 op=nft_register_chain pid=4925 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:09.199000 audit[4925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29460 a0=3 a1=ffffef7e1790 a2=0 a3=ffff971e0fa8 items=0 ppid=4129 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.199000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:09.203728 kernel: audit: type=1325 audit(1766055909.199:688): table=filter:141 family=2 entries=59 op=nft_register_chain pid=4925 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:09.213000 audit: BPF prog-id=222 op=LOAD Dec 18 11:05:09.213000 audit: BPF prog-id=223 op=LOAD Dec 18 11:05:09.213000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4884 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663763343839613462376239343738643136623633396433303261 Dec 18 11:05:09.214000 audit: BPF prog-id=223 op=UNLOAD Dec 18 11:05:09.214000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4884 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663763343839613462376239343738643136623633396433303261 Dec 18 11:05:09.214000 audit: BPF prog-id=224 op=LOAD Dec 18 11:05:09.214000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4884 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663763343839613462376239343738643136623633396433303261 Dec 18 11:05:09.214000 audit: BPF prog-id=225 op=LOAD Dec 18 11:05:09.214000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4884 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.214000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663763343839613462376239343738643136623633396433303261 Dec 18 11:05:09.214000 audit: BPF prog-id=225 op=UNLOAD Dec 18 11:05:09.214000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4884 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663763343839613462376239343738643136623633396433303261 Dec 18 11:05:09.214000 audit: BPF prog-id=224 op=UNLOAD Dec 18 11:05:09.214000 audit[4895]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4884 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663763343839613462376239343738643136623633396433303261 Dec 18 11:05:09.214000 audit: BPF prog-id=226 op=LOAD Dec 18 11:05:09.214000 audit[4895]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4884 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130663763343839613462376239343738643136623633396433303261 Dec 18 11:05:09.217092 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 18 11:05:09.224258 containerd[1520]: time="2025-12-18T11:05:09.224203805Z" level=info msg="connecting to shim 2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c" address="unix:///run/containerd/s/3d6c9728e48f1cca58964f19787cabc5302bb0cc577152f0d62f2a0b8cfa0c9e" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:05:09.236793 systemd-networkd[1350]: cali1d8b92b3126: Link UP Dec 18 11:05:09.237432 systemd-networkd[1350]: cali1d8b92b3126: Gained carrier Dec 18 11:05:09.256058 systemd[1]: Started cri-containerd-2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c.scope - libcontainer container 2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c. 
Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:08.936 [INFO][4772] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0 calico-apiserver-68744c49b- calico-apiserver bdeec125-ec5f-4e1e-9801-c884f349294d 883 0 2025-12-18 11:04:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68744c49b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68744c49b-xhw45 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d8b92b3126 [] [] }} ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-xhw45" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--xhw45-" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:08.936 [INFO][4772] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-xhw45" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:08.978 [INFO][4820] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" HandleID="k8s-pod-network.9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Workload="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:08.978 [INFO][4820] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" HandleID="k8s-pod-network.9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Workload="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004287a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68744c49b-xhw45", "timestamp":"2025-12-18 11:05:08.978544383 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:08.978 [INFO][4820] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.141 [INFO][4820] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.141 [INFO][4820] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.168 [INFO][4820] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" host="localhost" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.183 [INFO][4820] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.193 [INFO][4820] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.197 [INFO][4820] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.202 [INFO][4820] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.203 [INFO][4820] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" host="localhost" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.205 [INFO][4820] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836 Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.209 [INFO][4820] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" host="localhost" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.218 [INFO][4820] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" host="localhost" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.218 [INFO][4820] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" host="localhost" Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.218 [INFO][4820] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 18 11:05:09.256858 containerd[1520]: 2025-12-18 11:05:09.218 [INFO][4820] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" HandleID="k8s-pod-network.9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Workload="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" Dec 18 11:05:09.257395 containerd[1520]: 2025-12-18 11:05:09.227 [INFO][4772] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-xhw45" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0", GenerateName:"calico-apiserver-68744c49b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdeec125-ec5f-4e1e-9801-c884f349294d", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68744c49b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68744c49b-xhw45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d8b92b3126", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:09.257395 containerd[1520]: 2025-12-18 11:05:09.228 [INFO][4772] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-xhw45" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" Dec 18 11:05:09.257395 containerd[1520]: 2025-12-18 11:05:09.228 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d8b92b3126 ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-xhw45" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" Dec 18 11:05:09.257395 containerd[1520]: 2025-12-18 11:05:09.237 [INFO][4772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-xhw45" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" Dec 18 11:05:09.257395 containerd[1520]: 2025-12-18 11:05:09.238 [INFO][4772] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-xhw45" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0", GenerateName:"calico-apiserver-68744c49b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdeec125-ec5f-4e1e-9801-c884f349294d", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 18, 11, 4, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68744c49b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836", Pod:"calico-apiserver-68744c49b-xhw45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d8b92b3126", MAC:"52:2d:9d:36:d2:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 18 11:05:09.257395 containerd[1520]: 2025-12-18 11:05:09.251 [INFO][4772] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" Namespace="calico-apiserver" Pod="calico-apiserver-68744c49b-xhw45" WorkloadEndpoint="localhost-k8s-calico--apiserver--68744c49b--xhw45-eth0" Dec 18 11:05:09.266926 containerd[1520]: time="2025-12-18T11:05:09.266865696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6j22f,Uid:400efbcf-c6cb-4177-bb40-b857f3dc9989,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0f7c489a4b7b9478d16b639d302ab4cff75c24af35f88d831288914557b4e43\"" Dec 18 11:05:09.269279 containerd[1520]: time="2025-12-18T11:05:09.269217817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 18 11:05:09.271000 audit: BPF prog-id=227 op=LOAD Dec 18 11:05:09.272000 audit: BPF prog-id=228 op=LOAD Dec 18 11:05:09.272000 audit[4947]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4935 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262643934616138646538373435633830613861313366333832326534 Dec 18 11:05:09.272000 audit: BPF prog-id=228 op=UNLOAD Dec 18 11:05:09.272000 audit[4947]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4935 
pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262643934616138646538373435633830613861313366333832326534 Dec 18 11:05:09.272000 audit: BPF prog-id=229 op=LOAD Dec 18 11:05:09.272000 audit[4947]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4935 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262643934616138646538373435633830613861313366333832326534 Dec 18 11:05:09.272000 audit: BPF prog-id=230 op=LOAD Dec 18 11:05:09.272000 audit[4947]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4935 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262643934616138646538373435633830613861313366333832326534 Dec 18 11:05:09.272000 audit: BPF prog-id=230 op=UNLOAD Dec 18 11:05:09.272000 audit[4947]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4935 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262643934616138646538373435633830613861313366333832326534 Dec 18 11:05:09.272000 audit: BPF prog-id=229 op=UNLOAD Dec 18 11:05:09.272000 audit[4947]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4935 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262643934616138646538373435633830613861313366333832326534 Dec 18 11:05:09.272000 audit: BPF prog-id=231 op=LOAD Dec 18 11:05:09.272000 audit[4947]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4935 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262643934616138646538373435633830613861313366333832326534 Dec 18 11:05:09.274252 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 18 11:05:09.275000 audit[4981]: NETFILTER_CFG table=filter:142 family=2 entries=53 op=nft_register_chain pid=4981 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 18 11:05:09.275000 audit[4981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26592 a0=3 a1=fffffdcd3340 a2=0 a3=ffff93b45fa8 items=0 ppid=4129 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.275000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 18 11:05:09.278862 systemd-networkd[1350]: calic0a615a47ed: Gained IPv6LL Dec 18 11:05:09.282485 containerd[1520]: time="2025-12-18T11:05:09.282086580Z" level=info msg="connecting to shim 9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836" address="unix:///run/containerd/s/91277a127b698f0dac83f94ddc3417c95a6bb7568489166cfa25e2bbf1c9cd9b" namespace=k8s.io protocol=ttrpc version=3 Dec 18 11:05:09.299743 containerd[1520]: time="2025-12-18T11:05:09.299505264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68744c49b-dflk9,Uid:fc3f310c-1439-4243-ade3-cd849e5460ff,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2bd94aa8de8745c80a8a13f3822e452d8933214249b6b294b92b54d08ff25a3c\"" Dec 18 11:05:09.318990 systemd[1]: Started cri-containerd-9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836.scope - libcontainer container 9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836. 
Dec 18 11:05:09.329000 audit: BPF prog-id=232 op=LOAD Dec 18 11:05:09.330000 audit: BPF prog-id=233 op=LOAD Dec 18 11:05:09.330000 audit[5002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4990 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393531303165313739333439656164613438303238333932363562 Dec 18 11:05:09.330000 audit: BPF prog-id=233 op=UNLOAD Dec 18 11:05:09.330000 audit[5002]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4990 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393531303165313739333439656164613438303238333932363562 Dec 18 11:05:09.331000 audit: BPF prog-id=234 op=LOAD Dec 18 11:05:09.331000 audit[5002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4990 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393531303165313739333439656164613438303238333932363562 Dec 18 11:05:09.331000 audit: BPF prog-id=235 op=LOAD Dec 18 11:05:09.331000 audit[5002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4990 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393531303165313739333439656164613438303238333932363562 Dec 18 11:05:09.331000 audit: BPF prog-id=235 op=UNLOAD Dec 18 11:05:09.331000 audit[5002]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4990 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393531303165313739333439656164613438303238333932363562 Dec 18 11:05:09.331000 audit: BPF prog-id=234 op=UNLOAD Dec 18 11:05:09.331000 audit[5002]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4990 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393531303165313739333439656164613438303238333932363562 Dec 18 11:05:09.331000 audit: BPF prog-id=236 op=LOAD Dec 18 11:05:09.331000 audit[5002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4990 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:09.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393531303165313739333439656164613438303238333932363562 Dec 18 11:05:09.334877 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 18 11:05:09.362070 containerd[1520]: time="2025-12-18T11:05:09.361963840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68744c49b-xhw45,Uid:bdeec125-ec5f-4e1e-9801-c884f349294d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9595101e179349eada4802839265b9b507aa260baf797c8150525843ddab0836\"" Dec 18 11:05:09.406912 systemd-networkd[1350]: cali8641a78f756: Gained IPv6LL Dec 18 11:05:09.499505 containerd[1520]: time="2025-12-18T11:05:09.499459795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:09.500393 containerd[1520]: time="2025-12-18T11:05:09.500353755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 18 11:05:09.500479 containerd[1520]: time="2025-12-18T11:05:09.500366915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:09.500604 kubelet[2695]: E1218 11:05:09.500572 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 18 11:05:09.500738 kubelet[2695]: E1218 11:05:09.500617 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 18 11:05:09.500872 kubelet[2695]: E1218 11:05:09.500820 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85t57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6j22f_calico-system(400efbcf-c6cb-4177-bb40-b857f3dc9989): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:09.501045 containerd[1520]: time="2025-12-18T11:05:09.501020516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 18 11:05:09.502358 kubelet[2695]: E1218 11:05:09.502296 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6j22f" podUID="400efbcf-c6cb-4177-bb40-b857f3dc9989" Dec 
18 11:05:09.716643 containerd[1520]: time="2025-12-18T11:05:09.716521210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:09.717571 containerd[1520]: time="2025-12-18T11:05:09.717534851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 18 11:05:09.717651 containerd[1520]: time="2025-12-18T11:05:09.717602131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:09.717786 kubelet[2695]: E1218 11:05:09.717746 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:09.717823 kubelet[2695]: E1218 11:05:09.717801 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:09.718075 kubelet[2695]: E1218 11:05:09.718028 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8djv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-68744c49b-dflk9_calico-apiserver(fc3f310c-1439-4243-ade3-cd849e5460ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:09.718220 containerd[1520]: time="2025-12-18T11:05:09.718195691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 18 11:05:09.719747 kubelet[2695]: E1218 11:05:09.719704 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" podUID="fc3f310c-1439-4243-ade3-cd849e5460ff" Dec 18 11:05:09.938741 containerd[1520]: time="2025-12-18T11:05:09.938677947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:09.951779 containerd[1520]: time="2025-12-18T11:05:09.951704830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 18 11:05:09.951886 containerd[1520]: time="2025-12-18T11:05:09.951751870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:09.952002 kubelet[2695]: E1218 11:05:09.951952 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:09.952109 kubelet[2695]: E1218 11:05:09.952005 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:09.952197 kubelet[2695]: E1218 11:05:09.952157 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jstfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68744c49b-xhw45_calico-apiserver(bdeec125-ec5f-4e1e-9801-c884f349294d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:09.953410 kubelet[2695]: E1218 11:05:09.953351 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" podUID="bdeec125-ec5f-4e1e-9801-c884f349294d" Dec 18 11:05:10.060296 kubelet[2695]: E1218 11:05:10.060254 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" podUID="fc3f310c-1439-4243-ade3-cd849e5460ff" Dec 18 11:05:10.063092 kubelet[2695]: E1218 11:05:10.063027 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6j22f" podUID="400efbcf-c6cb-4177-bb40-b857f3dc9989" Dec 18 11:05:10.064670 kubelet[2695]: E1218 11:05:10.064569 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:10.064890 kubelet[2695]: E1218 11:05:10.064858 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" podUID="bdeec125-ec5f-4e1e-9801-c884f349294d" Dec 18 11:05:10.064961 kubelet[2695]: E1218 11:05:10.064938 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:05:10.068237 kubelet[2695]: E1218 11:05:10.065248 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" podUID="68f46b97-44ea-43d3-8c4b-06c74ab4d137" Dec 18 11:05:10.088000 audit[5040]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:10.088000 audit[5040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc3462c90 a2=0 a3=1 items=0 ppid=2850 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:10.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:10.110000 audit[5040]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:10.110000 audit[5040]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=19860 a0=3 a1=ffffc3462c90 a2=0 a3=1 items=0 ppid=2850 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:10.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:10.879057 systemd-networkd[1350]: calia749b8e67d4: Gained IPv6LL Dec 18 11:05:11.066843 kubelet[2695]: E1218 11:05:11.066260 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:11.067416 kubelet[2695]: E1218 11:05:11.066873 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6j22f" podUID="400efbcf-c6cb-4177-bb40-b857f3dc9989" Dec 18 11:05:11.067416 kubelet[2695]: E1218 11:05:11.066790 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" podUID="fc3f310c-1439-4243-ade3-cd849e5460ff" Dec 18 11:05:11.067416 kubelet[2695]: E1218 11:05:11.067068 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" podUID="bdeec125-ec5f-4e1e-9801-c884f349294d" Dec 18 11:05:11.071294 systemd-networkd[1350]: cali0e552f4c804: Gained IPv6LL Dec 18 11:05:11.128000 audit[5043]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:11.128000 audit[5043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcf2684a0 a2=0 a3=1 items=0 ppid=2850 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:11.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:11.143000 audit[5043]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:11.143000 audit[5043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcf2684a0 a2=0 a3=1 
items=0 ppid=2850 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:11.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:11.262858 systemd-networkd[1350]: cali1d8b92b3126: Gained IPv6LL Dec 18 11:05:12.853159 systemd[1]: Started sshd@9-12289-10.0.0.27:22-10.0.0.1:40966.service - OpenSSH per-connection server daemon (10.0.0.1:40966). Dec 18 11:05:12.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-12289-10.0.0.27:22-10.0.0.1:40966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:12.924000 audit[5045]: AUDIT1101 pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:12.925784 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 40966 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:12.925000 audit[5045]: AUDIT1103 pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:12.926000 audit[5045]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff48bcd90 a2=3 a3=0 items=0 ppid=1 pid=5045 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:12.926000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:12.927481 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:12.932005 systemd-logind[1499]: New session '11' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:12.940916 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 18 11:05:12.943000 audit[5045]: AUDIT1105 pid=5045 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:12.944000 audit[5049]: AUDIT1103 pid=5049 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.034112 sshd[5049]: Connection closed by 10.0.0.1 port 40966 Dec 18 11:05:13.035734 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:13.035000 audit[5045]: AUDIT1106 pid=5045 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.035000 audit[5045]: AUDIT1104 pid=5045 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.044796 systemd[1]: sshd@9-12289-10.0.0.27:22-10.0.0.1:40966.service: Deactivated successfully. Dec 18 11:05:13.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-12289-10.0.0.27:22-10.0.0.1:40966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:13.048545 systemd[1]: session-11.scope: Deactivated successfully. Dec 18 11:05:13.050295 systemd-logind[1499]: Session 11 logged out. Waiting for processes to exit. Dec 18 11:05:13.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-12290-10.0.0.27:22-10.0.0.1:40976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:13.053278 systemd[1]: Started sshd@10-12290-10.0.0.27:22-10.0.0.1:40976.service - OpenSSH per-connection server daemon (10.0.0.1:40976). Dec 18 11:05:13.053954 systemd-logind[1499]: Removed session 11. 
Dec 18 11:05:13.107000 audit[5064]: AUDIT1101 pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.108630 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 40976 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:13.108000 audit[5064]: AUDIT1103 pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.108000 audit[5064]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9fab110 a2=3 a3=0 items=0 ppid=1 pid=5064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:13.108000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:13.110137 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:13.113737 systemd-logind[1499]: New session '12' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:13.119967 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 18 11:05:13.121000 audit[5064]: AUDIT1105 pid=5064 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.123000 audit[5068]: AUDIT1103 pid=5068 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.294810 sshd[5068]: Connection closed by 10.0.0.1 port 40976 Dec 18 11:05:13.296237 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:13.296000 audit[5064]: AUDIT1106 pid=5064 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.296000 audit[5064]: AUDIT1104 pid=5064 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.309017 systemd[1]: sshd@10-12290-10.0.0.27:22-10.0.0.1:40976.service: Deactivated successfully. Dec 18 11:05:13.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-12290-10.0.0.27:22-10.0.0.1:40976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:13.312015 systemd[1]: session-12.scope: Deactivated successfully. Dec 18 11:05:13.316298 systemd-logind[1499]: Session 12 logged out. Waiting for processes to exit. 
Dec 18 11:05:13.320723 systemd[1]: Started sshd@11-4100-10.0.0.27:22-10.0.0.1:40982.service - OpenSSH per-connection server daemon (10.0.0.1:40982). Dec 18 11:05:13.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-4100-10.0.0.27:22-10.0.0.1:40982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:13.321769 systemd-logind[1499]: Removed session 12. Dec 18 11:05:13.380000 audit[5079]: AUDIT1101 pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.381599 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 40982 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:13.381000 audit[5079]: AUDIT1103 pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.382000 audit[5079]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3839db0 a2=3 a3=0 items=0 ppid=1 pid=5079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:13.382000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:13.383520 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:13.388510 systemd-logind[1499]: New session '13' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:13.396909 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 18 11:05:13.398000 audit[5079]: AUDIT1105 pid=5079 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.399000 audit[5083]: AUDIT1103 pid=5083 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.492149 sshd[5083]: Connection closed by 10.0.0.1 port 40982 Dec 18 11:05:13.492474 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:13.492000 audit[5079]: AUDIT1106 pid=5079 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.492000 audit[5079]: AUDIT1104 pid=5079 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:13.496684 systemd[1]: sshd@11-4100-10.0.0.27:22-10.0.0.1:40982.service: Deactivated successfully. 
Dec 18 11:05:13.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-4100-10.0.0.27:22-10.0.0.1:40982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:13.498657 systemd[1]: session-13.scope: Deactivated successfully. Dec 18 11:05:13.499454 systemd-logind[1499]: Session 13 logged out. Waiting for processes to exit. Dec 18 11:05:13.500370 systemd-logind[1499]: Removed session 13. Dec 18 11:05:14.858878 containerd[1520]: time="2025-12-18T11:05:14.858834466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 18 11:05:15.073532 containerd[1520]: time="2025-12-18T11:05:15.073495904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:15.074519 containerd[1520]: time="2025-12-18T11:05:15.074490424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 18 11:05:15.074850 containerd[1520]: time="2025-12-18T11:05:15.074553384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:15.074902 kubelet[2695]: E1218 11:05:15.074654 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 18 11:05:15.074902 kubelet[2695]: E1218 11:05:15.074693 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 18 11:05:15.074902 kubelet[2695]: E1218 11:05:15.074809 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9349b312f37c4e9f929d49397049c45a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8pp7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d6c7bc4f-mf4nr_calico-system(f07f5925-2bfb-4163-b634-d7861acf227f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:15.076950 containerd[1520]: time="2025-12-18T11:05:15.076927585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 18 11:05:15.316029 containerd[1520]: time="2025-12-18T11:05:15.315970426Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:15.326377 containerd[1520]: time="2025-12-18T11:05:15.326303228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 18 11:05:15.326460 containerd[1520]: time="2025-12-18T11:05:15.326431868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:15.326668 kubelet[2695]: E1218 11:05:15.326626 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 18 11:05:15.326732 kubelet[2695]: E1218 11:05:15.326680 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 18 11:05:15.326867 kubelet[2695]: E1218 11:05:15.326805 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pp7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d6c7bc4f-mf4nr_calico-system(f07f5925-2bfb-4163-b634-d7861acf227f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:15.327970 kubelet[2695]: E1218 11:05:15.327942 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d6c7bc4f-mf4nr" podUID="f07f5925-2bfb-4163-b634-d7861acf227f" Dec 18 11:05:18.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-8194-10.0.0.27:22-10.0.0.1:40996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:18.506249 systemd[1]: Started sshd@12-8194-10.0.0.27:22-10.0.0.1:40996.service - OpenSSH per-connection server daemon (10.0.0.1:40996). 
Dec 18 11:05:18.507033 kernel: kauditd_printk_skb: 116 callbacks suppressed Dec 18 11:05:18.507079 kernel: audit: type=1130 audit(1766055918.505:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-8194-10.0.0.27:22-10.0.0.1:40996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:18.556000 audit[5110]: AUDIT1101 pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.556930 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 40996 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:18.559000 audit[5110]: AUDIT1103 pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.561175 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:18.563048 kernel: audit: type=1101 audit(1766055918.556:746): pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.563097 kernel: audit: type=1103 audit(1766055918.559:747): pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.564776 kernel: audit: type=1006 audit(1766055918.559:748): pid=5110 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 18 11:05:18.559000 audit[5110]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb8d3fd0 a2=3 a3=0 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:18.565695 systemd-logind[1499]: New session '14' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:18.567936 kernel: audit: type=1300 audit(1766055918.559:748): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb8d3fd0 a2=3 a3=0 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:18.559000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:18.568085 kernel: audit: type=1327 audit(1766055918.559:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:18.568921 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 18 11:05:18.570000 audit[5110]: AUDIT1105 pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.571000 audit[5114]: AUDIT1103 pid=5114 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.577706 kernel: audit: type=1105 audit(1766055918.570:749): pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.577775 kernel: audit: type=1103 audit(1766055918.571:750): pid=5114 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.650414 sshd[5114]: Connection closed by 10.0.0.1 port 40996 Dec 18 11:05:18.651168 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:18.650000 audit[5110]: AUDIT1106 pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.655604 systemd[1]: sshd@12-8194-10.0.0.27:22-10.0.0.1:40996.service: Deactivated successfully. Dec 18 11:05:18.656755 kernel: audit: type=1106 audit(1766055918.650:751): pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.651000 audit[5110]: AUDIT1104 pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.659037 systemd[1]: session-14.scope: Deactivated successfully. Dec 18 11:05:18.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-8194-10.0.0.27:22-10.0.0.1:40996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:18.660619 systemd-logind[1499]: Session 14 logged out. Waiting for processes to exit. Dec 18 11:05:18.660732 kernel: audit: type=1104 audit(1766055918.651:752): pid=5110 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:18.662367 systemd-logind[1499]: Removed session 14. 
Dec 18 11:05:20.856812 containerd[1520]: time="2025-12-18T11:05:20.856772959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 18 11:05:21.134130 containerd[1520]: time="2025-12-18T11:05:21.133997152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:21.135070 containerd[1520]: time="2025-12-18T11:05:21.135024472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 18 11:05:21.135149 containerd[1520]: time="2025-12-18T11:05:21.135114472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:21.135288 kubelet[2695]: E1218 11:05:21.135251 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 18 11:05:21.135603 kubelet[2695]: E1218 11:05:21.135301 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 18 11:05:21.135603 kubelet[2695]: E1218 11:05:21.135420 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tdf8q_calico-system(de13faa1-4005-4e4c-bebe-9b34acc642ce): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:21.137580 containerd[1520]: time="2025-12-18T11:05:21.137558032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 18 11:05:21.347269 containerd[1520]: time="2025-12-18T11:05:21.347215777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:21.348268 containerd[1520]: time="2025-12-18T11:05:21.348223017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 18 11:05:21.348363 containerd[1520]: time="2025-12-18T11:05:21.348308857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:21.348459 kubelet[2695]: E1218 11:05:21.348426 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 18 11:05:21.348500 kubelet[2695]: E1218 11:05:21.348472 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 18 11:05:21.348922 kubelet[2695]: E1218 11:05:21.348591 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tdf8q_calico-system(de13faa1-4005-4e4c-bebe-9b34acc642ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:21.350144 kubelet[2695]: E1218 11:05:21.350037 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:05:22.857511 containerd[1520]: time="2025-12-18T11:05:22.857471587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 18 11:05:23.075742 containerd[1520]: time="2025-12-18T11:05:23.075682771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:23.076698 containerd[1520]: time="2025-12-18T11:05:23.076657131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 18 11:05:23.076791 containerd[1520]: time="2025-12-18T11:05:23.076753371Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:23.076944 kubelet[2695]: E1218 11:05:23.076910 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:23.077311 kubelet[2695]: E1218 11:05:23.076958 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:23.077311 kubelet[2695]: E1218 11:05:23.077200 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg7gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54fb4cdc46-f2s62_calico-apiserver(68f46b97-44ea-43d3-8c4b-06c74ab4d137): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:23.077509 containerd[1520]: time="2025-12-18T11:05:23.077451011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 18 11:05:23.078710 kubelet[2695]: E1218 11:05:23.078675 2695 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" podUID="68f46b97-44ea-43d3-8c4b-06c74ab4d137" Dec 18 11:05:23.273099 containerd[1520]: time="2025-12-18T11:05:23.273032871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:23.273995 containerd[1520]: time="2025-12-18T11:05:23.273962511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 18 11:05:23.274126 containerd[1520]: time="2025-12-18T11:05:23.274056351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:23.274259 kubelet[2695]: E1218 11:05:23.274221 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:23.274320 kubelet[2695]: E1218 11:05:23.274270 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:23.274710 kubelet[2695]: E1218 11:05:23.274393 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jstfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68744c49b-xhw45_calico-apiserver(bdeec125-ec5f-4e1e-9801-c884f349294d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:23.275931 kubelet[2695]: E1218 11:05:23.275902 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" podUID="bdeec125-ec5f-4e1e-9801-c884f349294d" Dec 18 11:05:23.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-8195-10.0.0.27:22-10.0.0.1:36910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:23.666245 systemd[1]: Started sshd@13-8195-10.0.0.27:22-10.0.0.1:36910.service - OpenSSH per-connection server daemon (10.0.0.1:36910). Dec 18 11:05:23.667375 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 18 11:05:23.667460 kernel: audit: type=1130 audit(1766055923.665:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-8195-10.0.0.27:22-10.0.0.1:36910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 18 11:05:23.715000 audit[5129]: AUDIT1101 pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.716221 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 36910 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:23.718994 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:23.717000 audit[5129]: AUDIT1103 pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.722414 kernel: audit: type=1101 audit(1766055923.715:755): pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.722453 kernel: audit: type=1103 audit(1766055923.717:756): pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.724498 kernel: audit: type=1006 audit(1766055923.717:757): pid=5129 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 18 11:05:23.717000 audit[5129]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0c39f80 a2=3 a3=0 items=0 ppid=1 pid=5129 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:23.726360 systemd-logind[1499]: New session '15' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:23.727792 kernel: audit: type=1300 audit(1766055923.717:757): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0c39f80 a2=3 a3=0 items=0 ppid=1 pid=5129 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:23.717000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:23.727875 kernel: audit: type=1327 audit(1766055923.717:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:23.731932 systemd[1]: Started session-15.scope - Session 15 of User core. 
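The PROCTITLE values in the audit records above are hex-encoded because the audit subsystem does not print process titles containing spaces or NUL argv separators verbatim; the value logged for these sshd records decodes to sshd-session: core [priv]. A minimal decoding sketch in Python (the helper name is mine, not a tool present on this host); the longer PROCTITLE on the iptables-restore records further down decodes the same way:

```python
# Minimal sketch: decode a hex-encoded audit PROCTITLE value. The audit
# subsystem hex-encodes process titles it will not print verbatim; NUL
# bytes separate argv entries, so they are rendered as spaces here.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

# Value copied verbatim from the sshd audit records above:
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
```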
Dec 18 11:05:23.733000 audit[5129]: AUDIT1105 pid=5129 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.734000 audit[5133]: AUDIT1103 pid=5133 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.737888 kernel: audit: type=1105 audit(1766055923.733:758): pid=5129 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.737909 kernel: audit: type=1103 audit(1766055923.734:759): pid=5133 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.816463 sshd[5133]: Connection closed by 10.0.0.1 port 36910 Dec 18 11:05:23.816797 sshd-session[5129]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:23.816000 audit[5129]: AUDIT1106 pid=5129 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.816000 audit[5129]: AUDIT1104 pid=5129 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.824405 kernel: audit: type=1106 audit(1766055923.816:760): pid=5129 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.823537 systemd[1]: sshd@13-8195-10.0.0.27:22-10.0.0.1:36910.service: Deactivated successfully. Dec 18 11:05:23.824583 kernel: audit: type=1104 audit(1766055923.816:761): pid=5129 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:23.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-8195-10.0.0.27:22-10.0.0.1:36910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:23.827483 systemd[1]: session-15.scope: Deactivated successfully. Dec 18 11:05:23.829710 systemd-logind[1499]: Session 15 logged out. Waiting for processes to exit. Dec 18 11:05:23.831188 systemd-logind[1499]: Removed session 15. 
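Every pull in this log fails identically: ghcr.io answers the manifest request with 404 Not Found, so containerd cannot resolve the tag and reports "not found". The resolve step can be reproduced off the node against the OCI distribution API; the sketch below is illustrative only, and the token endpoint, query parameters and Accept media types are assumptions about ghcr.io rather than anything taken from this log:

```python
# Rough sketch: check whether a tag resolves to a manifest on ghcr.io,
# mirroring the resolve step containerd performs before pulling.
# Endpoints and media types follow the OCI distribution spec and are
# assumptions here, not values taken from this log.
import json
import urllib.error
import urllib.request

def manifest_status(repository: str, tag: str) -> int:
    # Anonymous pull token for a public repository (assumed token endpoint).
    token_url = ("https://ghcr.io/token?service=ghcr.io"
                 f"&scope=repository:{repository}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repository}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.oci.image.manifest.v1+json",
            ]),
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status   # 200: the tag resolves to a manifest
    except urllib.error.HTTPError as err:
        return err.code          # 404 would match the failures logged above

# One of the references failing in this log:
print(manifest_status("flatcar/calico/apiserver", "v3.30.4"))
```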
Dec 18 11:05:23.857239 containerd[1520]: time="2025-12-18T11:05:23.857204331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 18 11:05:24.069725 containerd[1520]: time="2025-12-18T11:05:24.069677112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:24.070807 containerd[1520]: time="2025-12-18T11:05:24.070762513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 18 11:05:24.070877 containerd[1520]: time="2025-12-18T11:05:24.070825553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:24.071233 kubelet[2695]: E1218 11:05:24.070991 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 18 11:05:24.071233 kubelet[2695]: E1218 11:05:24.071052 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 18 11:05:24.071233 kubelet[2695]: E1218 11:05:24.071175 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2r7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-779d74d86-dfx2b_calico-system(91434238-f80e-42f1-bb48-55395c65ff33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:24.072435 kubelet[2695]: E1218 11:05:24.072399 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" podUID="91434238-f80e-42f1-bb48-55395c65ff33" Dec 18 11:05:24.856790 containerd[1520]: time="2025-12-18T11:05:24.856755348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 18 11:05:25.059555 containerd[1520]: time="2025-12-18T11:05:25.059383127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:25.060500 containerd[1520]: time="2025-12-18T11:05:25.060411288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 18 11:05:25.060562 containerd[1520]: time="2025-12-18T11:05:25.060482888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:25.060637 kubelet[2695]: E1218 11:05:25.060594 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 18 11:05:25.061076 kubelet[2695]: E1218 11:05:25.060649 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 18 11:05:25.061076 kubelet[2695]: E1218 11:05:25.060810 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85t57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6j22f_calico-system(400efbcf-c6cb-4177-bb40-b857f3dc9989): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:25.062021 kubelet[2695]: E1218 11:05:25.061974 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6j22f" podUID="400efbcf-c6cb-4177-bb40-b857f3dc9989" Dec 18 11:05:25.860275 containerd[1520]: time="2025-12-18T11:05:25.860031960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 
18 11:05:26.113493 containerd[1520]: time="2025-12-18T11:05:26.113289342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:26.114475 containerd[1520]: time="2025-12-18T11:05:26.114421062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 18 11:05:26.116138 containerd[1520]: time="2025-12-18T11:05:26.114493622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:26.116245 kubelet[2695]: E1218 11:05:26.114639 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:26.116245 kubelet[2695]: E1218 11:05:26.114684 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:26.116245 kubelet[2695]: E1218 11:05:26.114841 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8djv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-68744c49b-dflk9_calico-apiserver(fc3f310c-1439-4243-ade3-cd849e5460ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:26.116700 kubelet[2695]: E1218 11:05:26.116286 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" podUID="fc3f310c-1439-4243-ade3-cd849e5460ff" Dec 18 11:05:28.830343 systemd[1]: Started sshd@14-8196-10.0.0.27:22-10.0.0.1:36924.service - OpenSSH per-connection server daemon (10.0.0.1:36924). Dec 18 11:05:28.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-8196-10.0.0.27:22-10.0.0.1:36924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:28.834155 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 18 11:05:28.834205 kernel: audit: type=1130 audit(1766055928.829:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-8196-10.0.0.27:22-10.0.0.1:36924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:28.860976 kubelet[2695]: E1218 11:05:28.860367 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d6c7bc4f-mf4nr" podUID="f07f5925-2bfb-4163-b634-d7861acf227f" Dec 18 11:05:28.897000 audit[5155]: AUDIT1101 pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:28.898685 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 36924 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:28.898000 audit[5155]: AUDIT1103 pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:28.903270 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:28.905763 kernel: audit: type=1101 audit(1766055928.897:764): pid=5155 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:28.905807 kernel: audit: type=1103 audit(1766055928.898:765): pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:28.905849 kernel: audit: type=1006 audit(1766055928.898:766): pid=5155 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 18 11:05:28.898000 audit[5155]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2b09cc0 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:28.907432 systemd-logind[1499]: New session '16' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:28.910722 kernel: audit: type=1300 audit(1766055928.898:766): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2b09cc0 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:28.898000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:28.911894 kernel: audit: type=1327 audit(1766055928.898:766): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:28.917939 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 18 11:05:28.920000 audit[5155]: AUDIT1105 pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:28.922000 audit[5159]: AUDIT1103 pid=5159 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:28.927876 kernel: audit: type=1105 audit(1766055928.920:767): pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:28.927931 kernel: audit: type=1103 audit(1766055928.922:768): pid=5159 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:29.010749 sshd[5159]: Connection closed by 10.0.0.1 port 36924 Dec 18 11:05:29.011089 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:29.010000 audit[5155]: AUDIT1106 pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:29.011000 audit[5155]: AUDIT1104 pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:29.015518 systemd[1]: sshd@14-8196-10.0.0.27:22-10.0.0.1:36924.service: Deactivated successfully. Dec 18 11:05:29.017835 systemd[1]: session-16.scope: Deactivated successfully. Dec 18 11:05:29.018574 kernel: audit: type=1106 audit(1766055929.010:769): pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:29.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-8196-10.0.0.27:22-10.0.0.1:36924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:29.018708 kernel: audit: type=1104 audit(1766055929.011:770): pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:29.018974 systemd-logind[1499]: Session 16 logged out. Waiting for processes to exit. Dec 18 11:05:29.021079 systemd-logind[1499]: Removed session 16. 
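Between the SSH sessions the kubelet keeps cycling these containers from ErrImagePull into ImagePullBackOff as its retry back-off grows, and the waiting reason it publishes in each pod's status carries the same message text seen here. A sketch that lists the affected containers with the official Kubernetes Python client; the calls are standard client methods, but nothing below was run against this cluster:

```python
# Sketch: list containers whose images cannot be pulled, using the
# waiting reason the kubelet publishes in pod status (ErrImagePull /
# ImagePullBackOff), matching the errors logged above.
from kubernetes import client, config

def waiting_on_images():
    config.load_kube_config()          # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        for cs in (pod.status.container_statuses or []):
            waiting = cs.state.waiting if cs.state else None
            if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
                print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                      f"{cs.name}: {waiting.reason} ({cs.image})")

if __name__ == "__main__":
    waiting_on_images()
```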
Dec 18 11:05:34.023380 systemd[1]: Started sshd@15-6-10.0.0.27:22-10.0.0.1:53886.service - OpenSSH per-connection server daemon (10.0.0.1:53886). Dec 18 11:05:34.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-6-10.0.0.27:22-10.0.0.1:53886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:34.024978 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 18 11:05:34.025013 kernel: audit: type=1130 audit(1766055934.022:772): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-6-10.0.0.27:22-10.0.0.1:53886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:34.082000 audit[5178]: AUDIT1101 pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.083253 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 53886 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:34.085000 audit[5178]: AUDIT1103 pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.086854 kernel: audit: type=1101 audit(1766055934.082:773): pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.086883 kernel: audit: type=1103 audit(1766055934.085:774): pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.087499 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:34.089748 kernel: audit: type=1006 audit(1766055934.086:775): pid=5178 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 18 11:05:34.086000 audit[5178]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2818490 a2=3 a3=0 items=0 ppid=1 pid=5178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:34.095027 kernel: audit: type=1300 audit(1766055934.086:775): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2818490 a2=3 a3=0 items=0 ppid=1 pid=5178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:34.086000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:34.096455 kernel: audit: type=1327 audit(1766055934.086:775): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:34.098910 systemd-logind[1499]: New session '17' of user 'core' with class 'user' and type 'tty'. 
Dec 18 11:05:34.106884 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 18 11:05:34.108000 audit[5178]: AUDIT1105 pid=5178 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.110000 audit[5182]: AUDIT1103 pid=5182 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.116085 kernel: audit: type=1105 audit(1766055934.108:776): pid=5178 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.116145 kernel: audit: type=1103 audit(1766055934.110:777): pid=5182 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.206036 sshd[5182]: Connection closed by 10.0.0.1 port 53886 Dec 18 11:05:34.206360 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:34.206000 audit[5178]: AUDIT1106 pid=5178 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.206000 audit[5178]: AUDIT1104 pid=5178 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.213505 kernel: audit: type=1106 audit(1766055934.206:778): pid=5178 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.213549 kernel: audit: type=1104 audit(1766055934.206:779): pid=5178 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.220245 systemd[1]: sshd@15-6-10.0.0.27:22-10.0.0.1:53886.service: Deactivated successfully. Dec 18 11:05:34.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-6-10.0.0.27:22-10.0.0.1:53886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:34.223399 systemd[1]: session-17.scope: Deactivated successfully. Dec 18 11:05:34.224264 systemd-logind[1499]: Session 17 logged out. Waiting for processes to exit. 
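journald renders audit record types it has no symbolic name for as AUDIT followed by the number, while the same events reach the kernel ring buffer as audit: type=NUMBER; the 1101/1103/1105/1106/1104 records bracketing each SSH login are the PAM account, credential and session events. A partial lookup for the types seen in this capture, transcribed from the kernel audit header from memory, so treat the numbers as assumptions to verify:

```python
# Partial lookup for the audit record types appearing in this capture.
# Numeric values are recalled from include/uapi/linux/audit.h and should
# be verified against the headers or `ausearch -i` before relying on them.
AUDIT_TYPES = {
    1006: "LOGIN",          # login uid/session id assigned
    1101: "USER_ACCT",      # PAM accounting
    1103: "CRED_ACQ",       # PAM credentials acquired (setcred)
    1104: "CRED_DISP",      # PAM credentials disposed
    1105: "USER_START",     # PAM session opened
    1106: "USER_END",       # PAM session closed
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",   # systemd unit stopped
    1300: "SYSCALL",
    1325: "NETFILTER_CFG",
    1327: "PROCTITLE",
}

def type_name(type_id: int) -> str:
    return AUDIT_TYPES.get(type_id, f"AUDIT{type_id}")

print(type_name(1105))  # USER_START; journald shows this one as AUDIT1105
```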
Dec 18 11:05:34.227672 systemd[1]: Started sshd@16-8197-10.0.0.27:22-10.0.0.1:53892.service - OpenSSH per-connection server daemon (10.0.0.1:53892). Dec 18 11:05:34.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-8197-10.0.0.27:22-10.0.0.1:53892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:34.228655 systemd-logind[1499]: Removed session 17. Dec 18 11:05:34.285000 audit[5196]: AUDIT1101 pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.286983 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 53892 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:34.287000 audit[5196]: AUDIT1103 pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.287000 audit[5196]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2a817a0 a2=3 a3=0 items=0 ppid=1 pid=5196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:34.287000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:34.288597 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:34.293712 systemd-logind[1499]: New session '18' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:34.304948 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 18 11:05:34.306000 audit[5196]: AUDIT1105 pid=5196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.308000 audit[5200]: AUDIT1103 pid=5200 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.461788 sshd[5200]: Connection closed by 10.0.0.1 port 53892 Dec 18 11:05:34.461674 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:34.461000 audit[5196]: AUDIT1106 pid=5196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.462000 audit[5196]: AUDIT1104 pid=5196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.472068 systemd[1]: sshd@16-8197-10.0.0.27:22-10.0.0.1:53892.service: Deactivated successfully. 
Dec 18 11:05:34.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-8197-10.0.0.27:22-10.0.0.1:53892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:34.475085 systemd[1]: session-18.scope: Deactivated successfully. Dec 18 11:05:34.475834 systemd-logind[1499]: Session 18 logged out. Waiting for processes to exit. Dec 18 11:05:34.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-4101-10.0.0.27:22-10.0.0.1:53908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:34.478041 systemd[1]: Started sshd@17-4101-10.0.0.27:22-10.0.0.1:53908.service - OpenSSH per-connection server daemon (10.0.0.1:53908). Dec 18 11:05:34.479449 systemd-logind[1499]: Removed session 18. Dec 18 11:05:34.564000 audit[5211]: AUDIT1101 pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.565258 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 53908 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:34.565000 audit[5211]: AUDIT1103 pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.566000 audit[5211]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6b70750 a2=3 a3=0 items=0 ppid=1 pid=5211 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:34.566000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:34.568475 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:34.572596 systemd-logind[1499]: New session '19' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:34.587957 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 18 11:05:34.589000 audit[5211]: AUDIT1105 pid=5211 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.591000 audit[5215]: AUDIT1103 pid=5215 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:34.857845 kubelet[2695]: E1218 11:05:34.856886 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" podUID="bdeec125-ec5f-4e1e-9801-c884f349294d" Dec 18 11:05:34.857845 kubelet[2695]: E1218 11:05:34.857234 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" podUID="68f46b97-44ea-43d3-8c4b-06c74ab4d137" Dec 18 11:05:35.093000 audit[5229]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:35.093000 audit[5229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffcdc2390 a2=0 a3=1 items=0 ppid=2850 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:35.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:35.102738 sshd[5215]: Connection closed by 10.0.0.1 port 53908 Dec 18 11:05:35.103525 sshd-session[5211]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:35.103000 audit[5211]: AUDIT1106 pid=5211 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.103000 audit[5211]: AUDIT1104 pid=5211 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.104000 audit[5229]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:35.104000 audit[5229]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=5772 a0=3 a1=fffffcdc2390 a2=0 a3=1 items=0 ppid=2850 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:35.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:35.111928 systemd[1]: sshd@17-4101-10.0.0.27:22-10.0.0.1:53908.service: Deactivated successfully. Dec 18 11:05:35.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-4101-10.0.0.27:22-10.0.0.1:53908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:35.114538 systemd[1]: session-19.scope: Deactivated successfully. Dec 18 11:05:35.116983 systemd-logind[1499]: Session 19 logged out. Waiting for processes to exit. Dec 18 11:05:35.122857 systemd[1]: Started sshd@18-12291-10.0.0.27:22-10.0.0.1:53910.service - OpenSSH per-connection server daemon (10.0.0.1:53910). Dec 18 11:05:35.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-12291-10.0.0.27:22-10.0.0.1:53910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:35.125003 systemd-logind[1499]: Removed session 19. Dec 18 11:05:35.127000 audit[5236]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=5236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:35.127000 audit[5236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcc944b80 a2=0 a3=1 items=0 ppid=2850 pid=5236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:35.127000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:35.132000 audit[5236]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=5236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:35.132000 audit[5236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcc944b80 a2=0 a3=1 items=0 ppid=2850 pid=5236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:35.132000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:35.180000 audit[5235]: AUDIT1101 pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.181449 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 53910 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:35.181000 audit[5235]: AUDIT1103 pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' 
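The NETFILTER_CFG and SYSCALL pairs interleaved with the SSH records come from iptables-restore runs (comm=iptables-restor, exe=/usr/sbin/xtables-nft-multi); arch=c00000b7 is AUDIT_ARCH_AARCH64, and the hex PROCTITLE decodes, with the helper sketched earlier, to iptables-restore -w 5 -W 100000 --noflush --counters. A small sketch that pulls table, family, entry count and operation out of journal lines shaped like these (the field names come from the records themselves; the parsing is mine):

```python
# Sketch: summarize NETFILTER_CFG audit records like the ones above by
# table and operation, accumulating the reported entry counts.
import re
from collections import Counter

NETFILTER_RE = re.compile(
    r"NETFILTER_CFG table=(?P<table>[\w:]+) family=(?P<family>\d+) "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+)"
)

def summarize(journal_lines):
    totals = Counter()
    for line in journal_lines:
        m = NETFILTER_RE.search(line)
        if m:
            totals[(m["table"], m["op"])] += int(m["entries"])
    return totals

# Two of the records above, abbreviated:
sample = [
    "audit[5229]: NETFILTER_CFG table=filter:147 family=2 entries=26 "
    "op=nft_register_rule pid=5229 ...",
    "audit[5229]: NETFILTER_CFG table=nat:148 family=2 entries=20 "
    "op=nft_register_rule pid=5229 ...",
]
print(summarize(sample))
# -> Counter({('filter:147', 'nft_register_rule'): 26,
#             ('nat:148', 'nft_register_rule'): 20})
```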
Dec 18 11:05:35.181000 audit[5235]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff361ee60 a2=3 a3=0 items=0 ppid=1 pid=5235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:35.181000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:35.183386 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:35.188321 systemd-logind[1499]: New session '20' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:35.200946 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 18 11:05:35.203000 audit[5235]: AUDIT1105 pid=5235 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.204000 audit[5240]: AUDIT1103 pid=5240 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.450198 sshd[5240]: Connection closed by 10.0.0.1 port 53910 Dec 18 11:05:35.451595 sshd-session[5235]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:35.451000 audit[5235]: AUDIT1106 pid=5235 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.451000 audit[5235]: AUDIT1104 pid=5235 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-12291-10.0.0.27:22-10.0.0.1:53910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:35.460052 systemd[1]: sshd@18-12291-10.0.0.27:22-10.0.0.1:53910.service: Deactivated successfully. Dec 18 11:05:35.463212 systemd[1]: session-20.scope: Deactivated successfully. Dec 18 11:05:35.464778 systemd-logind[1499]: Session 20 logged out. Waiting for processes to exit. Dec 18 11:05:35.466495 systemd[1]: Started sshd@19-4102-10.0.0.27:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926). Dec 18 11:05:35.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-4102-10.0.0.27:22-10.0.0.1:53926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:35.469002 systemd-logind[1499]: Removed session 20. 
Dec 18 11:05:35.527000 audit[5251]: AUDIT1101 pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.528872 sshd[5251]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:35.528000 audit[5251]: AUDIT1103 pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.528000 audit[5251]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeb765430 a2=3 a3=0 items=0 ppid=1 pid=5251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:35.528000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:35.530775 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:35.535772 systemd-logind[1499]: New session '21' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:35.540907 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 18 11:05:35.543000 audit[5251]: AUDIT1105 pid=5251 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.545000 audit[5255]: AUDIT1103 pid=5255 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.624254 sshd[5255]: Connection closed by 10.0.0.1 port 53926 Dec 18 11:05:35.624682 sshd-session[5251]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:35.624000 audit[5251]: AUDIT1106 pid=5251 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.624000 audit[5251]: AUDIT1104 pid=5251 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:35.628040 systemd[1]: sshd@19-4102-10.0.0.27:22-10.0.0.1:53926.service: Deactivated successfully. Dec 18 11:05:35.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-4102-10.0.0.27:22-10.0.0.1:53926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:35.629823 systemd[1]: session-21.scope: Deactivated successfully. Dec 18 11:05:35.630561 systemd-logind[1499]: Session 21 logged out. Waiting for processes to exit. Dec 18 11:05:35.631647 systemd-logind[1499]: Removed session 21. 
Dec 18 11:05:35.859835 kubelet[2695]: E1218 11:05:35.859778 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:05:36.857622 kubelet[2695]: E1218 11:05:36.857558 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" podUID="fc3f310c-1439-4243-ade3-cd849e5460ff" Dec 18 11:05:36.858007 kubelet[2695]: E1218 11:05:36.857916 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" podUID="91434238-f80e-42f1-bb48-55395c65ff33" Dec 18 11:05:39.857504 kubelet[2695]: E1218 11:05:39.857424 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6j22f" podUID="400efbcf-c6cb-4177-bb40-b857f3dc9989" Dec 18 11:05:40.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-4103-10.0.0.27:22-10.0.0.1:53938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:40.644221 systemd[1]: Started sshd@20-4103-10.0.0.27:22-10.0.0.1:53938.service - OpenSSH per-connection server daemon (10.0.0.1:53938). Dec 18 11:05:40.645161 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 18 11:05:40.645217 kernel: audit: type=1130 audit(1766055940.643:821): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-4103-10.0.0.27:22-10.0.0.1:53938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 18 11:05:40.703000 audit[5297]: AUDIT1101 pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.704736 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 53938 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:40.707043 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:40.705000 audit[5297]: AUDIT1103 pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.710934 kernel: audit: type=1101 audit(1766055940.703:822): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.711074 kernel: audit: type=1103 audit(1766055940.705:823): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.713070 kernel: audit: type=1006 audit(1766055940.705:824): pid=5297 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 18 11:05:40.705000 audit[5297]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf978400 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:40.716594 kernel: audit: type=1300 audit(1766055940.705:824): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf978400 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:40.705000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:40.718116 kernel: audit: type=1327 audit(1766055940.705:824): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:40.719525 systemd-logind[1499]: New session '22' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:40.731984 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 18 11:05:40.733000 audit[5297]: AUDIT1105 pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.735000 audit[5301]: AUDIT1103 pid=5301 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.741515 kernel: audit: type=1105 audit(1766055940.733:825): pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.741576 kernel: audit: type=1103 audit(1766055940.735:826): pid=5301 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.825187 sshd[5301]: Connection closed by 10.0.0.1 port 53938 Dec 18 11:05:40.825488 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:40.825000 audit[5297]: AUDIT1106 pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.829547 systemd[1]: sshd@20-4103-10.0.0.27:22-10.0.0.1:53938.service: Deactivated successfully. Dec 18 11:05:40.825000 audit[5297]: AUDIT1104 pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-4103-10.0.0.27:22-10.0.0.1:53938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:40.832143 systemd[1]: session-22.scope: Deactivated successfully. Dec 18 11:05:40.833864 kernel: audit: type=1106 audit(1766055940.825:827): pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.833170 systemd-logind[1499]: Session 22 logged out. Waiting for processes to exit. Dec 18 11:05:40.833928 kernel: audit: type=1104 audit(1766055940.825:828): pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:40.834559 systemd-logind[1499]: Removed session 22. 
Dec 18 11:05:41.213000 audit[5314]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=5314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:41.213000 audit[5314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcfcb70b0 a2=0 a3=1 items=0 ppid=2850 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:41.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:41.228000 audit[5314]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=5314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 18 11:05:41.228000 audit[5314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffcfcb70b0 a2=0 a3=1 items=0 ppid=2850 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:41.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 18 11:05:42.856668 kubelet[2695]: E1218 11:05:42.856627 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:43.860858 containerd[1520]: time="2025-12-18T11:05:43.860151403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 18 11:05:44.094580 containerd[1520]: time="2025-12-18T11:05:44.094536447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:44.095580 containerd[1520]: time="2025-12-18T11:05:44.095545735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 18 11:05:44.095740 containerd[1520]: time="2025-12-18T11:05:44.095630855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:44.095779 kubelet[2695]: E1218 11:05:44.095741 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 18 11:05:44.096066 kubelet[2695]: E1218 11:05:44.095791 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 18 11:05:44.096066 kubelet[2695]: E1218 11:05:44.095886 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9349b312f37c4e9f929d49397049c45a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8pp7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d6c7bc4f-mf4nr_calico-system(f07f5925-2bfb-4163-b634-d7861acf227f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:44.098111 containerd[1520]: time="2025-12-18T11:05:44.097967153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 18 11:05:44.304002 containerd[1520]: time="2025-12-18T11:05:44.303956078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:44.304897 containerd[1520]: time="2025-12-18T11:05:44.304859404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 18 11:05:44.304948 containerd[1520]: time="2025-12-18T11:05:44.304892645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:44.305125 kubelet[2695]: E1218 11:05:44.305057 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 18 11:05:44.305179 kubelet[2695]: E1218 11:05:44.305137 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 18 11:05:44.305331 kubelet[2695]: E1218 11:05:44.305278 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pp7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d6c7bc4f-mf4nr_calico-system(f07f5925-2bfb-4163-b634-d7861acf227f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:44.306530 kubelet[2695]: E1218 11:05:44.306437 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d6c7bc4f-mf4nr" podUID="f07f5925-2bfb-4163-b634-d7861acf227f" Dec 18 11:05:45.842245 systemd[1]: Started sshd@21-7-10.0.0.27:22-10.0.0.1:52076.service - OpenSSH per-connection server daemon (10.0.0.1:52076). Dec 18 11:05:45.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-7-10.0.0.27:22-10.0.0.1:52076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 18 11:05:45.843290 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 18 11:05:45.843395 kernel: audit: type=1130 audit(1766055945.841:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-7-10.0.0.27:22-10.0.0.1:52076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:45.891526 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 52076 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:45.890000 audit[5322]: AUDIT1101 pid=5322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:45.894000 audit[5322]: AUDIT1103 pid=5322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:45.896269 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:45.898235 kernel: audit: type=1101 audit(1766055945.890:833): pid=5322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:45.898276 kernel: audit: type=1103 audit(1766055945.894:834): pid=5322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:45.898311 kernel: audit: type=1006 audit(1766055945.894:835): pid=5322 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 18 11:05:45.894000 audit[5322]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd704c530 a2=3 a3=0 items=0 ppid=1 pid=5322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:45.902780 systemd-logind[1499]: New session '23' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:45.894000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:45.903836 kernel: audit: type=1300 audit(1766055945.894:835): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd704c530 a2=3 a3=0 items=0 ppid=1 pid=5322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:45.903866 kernel: audit: type=1327 audit(1766055945.894:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:45.906986 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 18 11:05:45.908000 audit[5322]: AUDIT1105 pid=5322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:45.908000 audit[5326]: AUDIT1103 pid=5326 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:45.916336 kernel: audit: type=1105 audit(1766055945.908:836): pid=5322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:45.916383 kernel: audit: type=1103 audit(1766055945.908:837): pid=5326 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:46.002033 sshd[5326]: Connection closed by 10.0.0.1 port 52076 Dec 18 11:05:46.003824 sshd-session[5322]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:46.003000 audit[5322]: AUDIT1106 pid=5322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:46.008977 systemd[1]: sshd@21-7-10.0.0.27:22-10.0.0.1:52076.service: Deactivated successfully. Dec 18 11:05:46.004000 audit[5322]: AUDIT1104 pid=5322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:46.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-7-10.0.0.27:22-10.0.0.1:52076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:46.012297 systemd[1]: session-23.scope: Deactivated successfully. Dec 18 11:05:46.012851 kernel: audit: type=1106 audit(1766055946.003:838): pid=5322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:46.012878 kernel: audit: type=1104 audit(1766055946.004:839): pid=5322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:46.016048 systemd-logind[1499]: Session 23 logged out. Waiting for processes to exit. Dec 18 11:05:46.017497 systemd-logind[1499]: Removed session 23. 
Dec 18 11:05:46.855857 kubelet[2695]: E1218 11:05:46.855825 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:47.857138 containerd[1520]: time="2025-12-18T11:05:47.856958872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 18 11:05:48.073651 containerd[1520]: time="2025-12-18T11:05:48.073475819Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:48.075258 containerd[1520]: time="2025-12-18T11:05:48.074626746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 18 11:05:48.075345 kubelet[2695]: E1218 11:05:48.075204 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:48.075345 kubelet[2695]: E1218 11:05:48.075252 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:48.075681 kubelet[2695]: E1218 11:05:48.075493 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jstfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68744c49b-xhw45_calico-apiserver(bdeec125-ec5f-4e1e-9801-c884f349294d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:48.075832 containerd[1520]: time="2025-12-18T11:05:48.075543713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 18 11:05:48.076058 containerd[1520]: time="2025-12-18T11:05:48.075891235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:48.077197 kubelet[2695]: E1218 11:05:48.077160 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-xhw45" podUID="bdeec125-ec5f-4e1e-9801-c884f349294d" Dec 18 11:05:48.324754 containerd[1520]: time="2025-12-18T11:05:48.324562930Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:48.325696 containerd[1520]: time="2025-12-18T11:05:48.325594256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 18 11:05:48.325696 containerd[1520]: time="2025-12-18T11:05:48.325658137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:48.326278 kubelet[2695]: E1218 11:05:48.325807 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 18 11:05:48.326278 kubelet[2695]: E1218 11:05:48.325883 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 18 11:05:48.326278 kubelet[2695]: E1218 11:05:48.326024 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tdf8q_calico-system(de13faa1-4005-4e4c-bebe-9b34acc642ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:48.328205 containerd[1520]: time="2025-12-18T11:05:48.328166554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 18 11:05:48.511935 containerd[1520]: time="2025-12-18T11:05:48.511869456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:48.513165 containerd[1520]: time="2025-12-18T11:05:48.513044704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 18 11:05:48.513165 containerd[1520]: time="2025-12-18T11:05:48.513111784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:48.513404 kubelet[2695]: E1218 11:05:48.513290 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 18 11:05:48.513404 kubelet[2695]: E1218 11:05:48.513384 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 18 11:05:48.513534 kubelet[2695]: E1218 11:05:48.513494 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tdf8q_calico-system(de13faa1-4005-4e4c-bebe-9b34acc642ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:48.515463 kubelet[2695]: E1218 11:05:48.514792 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tdf8q" podUID="de13faa1-4005-4e4c-bebe-9b34acc642ce" Dec 18 11:05:48.858768 containerd[1520]: time="2025-12-18T11:05:48.858058999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 18 11:05:49.103682 containerd[1520]: 
time="2025-12-18T11:05:49.103642016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:49.104680 containerd[1520]: time="2025-12-18T11:05:49.104649102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 18 11:05:49.104772 containerd[1520]: time="2025-12-18T11:05:49.104733783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:49.105143 kubelet[2695]: E1218 11:05:49.104878 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:49.105143 kubelet[2695]: E1218 11:05:49.104927 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:49.105143 kubelet[2695]: E1218 11:05:49.105056 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8djv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-68744c49b-dflk9_calico-apiserver(fc3f310c-1439-4243-ade3-cd849e5460ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:49.106247 kubelet[2695]: E1218 11:05:49.106215 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68744c49b-dflk9" podUID="fc3f310c-1439-4243-ade3-cd849e5460ff" Dec 18 11:05:49.858752 containerd[1520]: time="2025-12-18T11:05:49.858507788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 18 11:05:50.066872 containerd[1520]: time="2025-12-18T11:05:50.066826767Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:50.068074 containerd[1520]: time="2025-12-18T11:05:50.068024495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 18 11:05:50.068194 containerd[1520]: time="2025-12-18T11:05:50.068101175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:50.068281 kubelet[2695]: E1218 11:05:50.068247 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:50.068326 kubelet[2695]: E1218 11:05:50.068292 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 18 11:05:50.068459 kubelet[2695]: E1218 11:05:50.068409 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg7gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54fb4cdc46-f2s62_calico-apiserver(68f46b97-44ea-43d3-8c4b-06c74ab4d137): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:50.069936 kubelet[2695]: E1218 11:05:50.069881 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54fb4cdc46-f2s62" podUID="68f46b97-44ea-43d3-8c4b-06c74ab4d137" Dec 18 11:05:51.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-12292-10.0.0.27:22-10.0.0.1:59334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:51.018145 systemd[1]: Started sshd@22-12292-10.0.0.27:22-10.0.0.1:59334.service - OpenSSH per-connection server daemon (10.0.0.1:59334). Dec 18 11:05:51.019735 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 18 11:05:51.019800 kernel: audit: type=1130 audit(1766055951.017:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-12292-10.0.0.27:22-10.0.0.1:59334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 18 11:05:51.068000 audit[5343]: AUDIT1101 pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.069317 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 59334 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:51.072739 kernel: audit: type=1101 audit(1766055951.068:842): pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.072000 audit[5343]: AUDIT1103 pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.074049 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:51.072000 audit[5343]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1af6140 a2=3 a3=0 items=0 ppid=1 pid=5343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:51.078339 kernel: audit: type=1103 audit(1766055951.072:843): pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.078366 kernel: audit: type=1006 audit(1766055951.072:844): pid=5343 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 18 11:05:51.078390 kernel: audit: type=1300 audit(1766055951.072:844): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1af6140 a2=3 a3=0 items=0 ppid=1 pid=5343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:51.079818 systemd-logind[1499]: New session '24' of user 'core' with class 'user' and type 'tty'. Dec 18 11:05:51.072000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:51.083051 kernel: audit: type=1327 audit(1766055951.072:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:51.087906 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 18 11:05:51.090000 audit[5343]: AUDIT1105 pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.093000 audit[5347]: AUDIT1103 pid=5347 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.098812 kernel: audit: type=1105 audit(1766055951.090:845): pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.098868 kernel: audit: type=1103 audit(1766055951.093:846): pid=5347 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.190743 sshd[5347]: Connection closed by 10.0.0.1 port 59334 Dec 18 11:05:51.191062 sshd-session[5343]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:51.190000 audit[5343]: AUDIT1106 pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.195025 systemd[1]: sshd@22-12292-10.0.0.27:22-10.0.0.1:59334.service: Deactivated successfully. Dec 18 11:05:51.190000 audit[5343]: AUDIT1104 pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.197991 systemd[1]: session-24.scope: Deactivated successfully. Dec 18 11:05:51.198377 kernel: audit: type=1106 audit(1766055951.190:847): pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-12292-10.0.0.27:22-10.0.0.1:59334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:51.198546 kernel: audit: type=1104 audit(1766055951.190:848): pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:51.199629 systemd-logind[1499]: Session 24 logged out. Waiting for processes to exit. Dec 18 11:05:51.201759 systemd-logind[1499]: Removed session 24. 
Dec 18 11:05:51.857934 kubelet[2695]: E1218 11:05:51.856973 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 18 11:05:51.858528 containerd[1520]: time="2025-12-18T11:05:51.858464139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 18 11:05:52.069514 containerd[1520]: time="2025-12-18T11:05:52.069436306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:52.070698 containerd[1520]: time="2025-12-18T11:05:52.070587433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 18 11:05:52.070698 containerd[1520]: time="2025-12-18T11:05:52.070660233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:52.071070 kubelet[2695]: E1218 11:05:52.070984 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 18 11:05:52.071070 kubelet[2695]: E1218 11:05:52.071031 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 18 11:05:52.071670 kubelet[2695]: E1218 11:05:52.071600 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2r7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-779d74d86-dfx2b_calico-system(91434238-f80e-42f1-bb48-55395c65ff33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:52.072829 kubelet[2695]: E1218 11:05:52.072788 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-779d74d86-dfx2b" podUID="91434238-f80e-42f1-bb48-55395c65ff33" Dec 18 11:05:53.858649 containerd[1520]: time="2025-12-18T11:05:53.858596458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 18 11:05:54.078897 containerd[1520]: time="2025-12-18T11:05:54.078821133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 18 11:05:54.079842 containerd[1520]: time="2025-12-18T11:05:54.079800139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 18 11:05:54.079925 containerd[1520]: time="2025-12-18T11:05:54.079882339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 18 11:05:54.080126 kubelet[2695]: E1218 11:05:54.080051 2695 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 18 11:05:54.080126 kubelet[2695]: E1218 11:05:54.080108 2695 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 18 11:05:54.080473 kubelet[2695]: E1218 
11:05:54.080298 2695 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85t57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6j22f_calico-system(400efbcf-c6cb-4177-bb40-b857f3dc9989): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 18 11:05:54.081490 kubelet[2695]: E1218 11:05:54.081453 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6j22f" podUID="400efbcf-c6cb-4177-bb40-b857f3dc9989" Dec 18 11:05:56.203235 systemd[1]: Started 
sshd@23-4104-10.0.0.27:22-10.0.0.1:59336.service - OpenSSH per-connection server daemon (10.0.0.1:59336). Dec 18 11:05:56.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-4104-10.0.0.27:22-10.0.0.1:59336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:56.207586 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 18 11:05:56.207688 kernel: audit: type=1130 audit(1766055956.202:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-4104-10.0.0.27:22-10.0.0.1:59336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:56.270000 audit[5361]: AUDIT1101 pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.270960 sshd[5361]: Accepted publickey for core from 10.0.0.1 port 59336 ssh2: RSA SHA256:P9m5ZrxwlYHOLfuNA/rncfZCcif33Yn8DcoMH8tt3gY Dec 18 11:05:56.273929 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 18 11:05:56.272000 audit[5361]: AUDIT1103 pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.276926 kernel: audit: type=1101 audit(1766055956.270:851): pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.277049 kernel: audit: type=1103 audit(1766055956.272:852): pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.278851 kernel: audit: type=1006 audit(1766055956.272:853): pid=5361 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 18 11:05:56.272000 audit[5361]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6776aa0 a2=3 a3=0 items=0 ppid=1 pid=5361 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:56.282330 kernel: audit: type=1300 audit(1766055956.272:853): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6776aa0 a2=3 a3=0 items=0 ppid=1 pid=5361 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 18 11:05:56.272000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:56.283887 kernel: audit: type=1327 audit(1766055956.272:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 18 11:05:56.285451 systemd-logind[1499]: New session '25' of user 'core' with class 'user' and type 'tty'. 
Dec 18 11:05:56.289956 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 18 11:05:56.292000 audit[5361]: AUDIT1105 pid=5361 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.297000 audit[5365]: AUDIT1103 pid=5365 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.302537 kernel: audit: type=1105 audit(1766055956.292:854): pid=5361 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.302686 kernel: audit: type=1103 audit(1766055956.297:855): pid=5365 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.424417 sshd[5365]: Connection closed by 10.0.0.1 port 59336 Dec 18 11:05:56.425250 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Dec 18 11:05:56.425000 audit[5361]: AUDIT1106 pid=5361 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.425000 audit[5361]: AUDIT1104 pid=5361 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.434754 kernel: audit: type=1106 audit(1766055956.425:856): pid=5361 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.434828 kernel: audit: type=1104 audit(1766055956.425:857): pid=5361 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 18 11:05:56.435155 systemd[1]: sshd@23-4104-10.0.0.27:22-10.0.0.1:59336.service: Deactivated successfully. Dec 18 11:05:56.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-4104-10.0.0.27:22-10.0.0.1:59336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 18 11:05:56.438451 systemd[1]: session-25.scope: Deactivated successfully. Dec 18 11:05:56.440925 systemd-logind[1499]: Session 25 logged out. Waiting for processes to exit. Dec 18 11:05:56.442269 systemd-logind[1499]: Removed session 25. 
Dec 18 11:05:57.856900 kubelet[2695]: E1218 11:05:57.856803 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"