Dec 12 17:20:24.387628 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:20:24.387652 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025
Dec 12 17:20:24.387662 kernel: KASLR enabled
Dec 12 17:20:24.387668 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:20:24.387673 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438351218
Dec 12 17:20:24.387679 kernel: random: crng init done
Dec 12 17:20:24.387686 kernel: secureboot: Secure boot disabled
Dec 12 17:20:24.387692 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:20:24.387698 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS )
Dec 12 17:20:24.387705 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:20:24.387712 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387718 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387724 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387730 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387739 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387745 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387752 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387758 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387765 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387771 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:20:24.387777 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 12 17:20:24.387784 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 12 17:20:24.387790 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:20:24.387798 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff]
Dec 12 17:20:24.387804 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff]
Dec 12 17:20:24.387810 kernel: Zone ranges:
Dec 12 17:20:24.387817 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 12 17:20:24.387823 kernel: DMA32 empty
Dec 12 17:20:24.387829 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Dec 12 17:20:24.387836 kernel: Device empty
Dec 12 17:20:24.387842 kernel: Movable zone start for each node
Dec 12 17:20:24.387848 kernel: Early memory node ranges
Dec 12 17:20:24.387854 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff]
Dec 12 17:20:24.387861 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff]
Dec 12 17:20:24.387867 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff]
Dec 12 17:20:24.387875 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff]
Dec 12 17:20:24.387881 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff]
Dec 12 17:20:24.387888 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff]
Dec 12 17:20:24.387894 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Dec 12 17:20:24.387900 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:20:24.387910 kernel: psci: PSCIv1.3 detected in firmware.
Dec 12 17:20:24.387918 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:20:24.387925 kernel: psci: Trusted OS migration not required
Dec 12 17:20:24.387931 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:20:24.387938 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:20:24.387945 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 12 17:20:24.387952 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 12 17:20:24.387959 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0
Dec 12 17:20:24.387965 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0
Dec 12 17:20:24.387973 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:20:24.387980 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:20:24.387987 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:20:24.387994 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:20:24.388000 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:20:24.388007 kernel: CPU features: detected: Spectre-v4
Dec 12 17:20:24.388014 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:20:24.388020 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:20:24.388027 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:20:24.388034 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:20:24.388041 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:20:24.388049 kernel: alternatives: applying boot alternatives
Dec 12 17:20:24.388057 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849
Dec 12 17:20:24.388064 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 12 17:20:24.388071 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 12 17:20:24.388077 kernel: Fallback order for Node 0: 0
Dec 12 17:20:24.388084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304
Dec 12 17:20:24.388091 kernel: Policy zone: Normal
Dec 12 17:20:24.388097 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:20:24.388104 kernel: software IO TLB: area num 4.
Dec 12 17:20:24.388111 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Dec 12 17:20:24.388119 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:20:24.388126 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:20:24.388133 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:20:24.388140 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:20:24.388147 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:20:24.388154 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:20:24.388161 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:20:24.388168 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:20:24.388174 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:20:24.388181 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:20:24.388188 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:20:24.388196 kernel: GICv3: 256 SPIs implemented
Dec 12 17:20:24.388203 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:20:24.388209 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:20:24.388216 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:20:24.388223 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:20:24.388230 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:20:24.388236 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:20:24.388243 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:20:24.388250 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:20:24.388257 kernel: GICv3: using LPI property table @0x0000000100130000
Dec 12 17:20:24.388264 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000
Dec 12 17:20:24.388270 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:20:24.388278 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:20:24.388285 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:20:24.388292 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:20:24.388299 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:20:24.388307 kernel: arm-pv: using stolen time PV
Dec 12 17:20:24.388314 kernel: Console: colour dummy device 80x25
Dec 12 17:20:24.388321 kernel: ACPI: Core revision 20240827
Dec 12 17:20:24.388329 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:20:24.388354 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:20:24.388361 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:20:24.388368 kernel: landlock: Up and running.
Dec 12 17:20:24.388375 kernel: SELinux: Initializing.
Dec 12 17:20:24.388383 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:20:24.388390 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:20:24.388406 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:20:24.388416 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:20:24.388423 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:20:24.388431 kernel: Remapping and enabling EFI services.
Dec 12 17:20:24.388438 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:20:24.388445 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:20:24.388452 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:20:24.388459 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000
Dec 12 17:20:24.388467 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:20:24.388475 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:20:24.388483 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:20:24.388494 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:20:24.388503 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000
Dec 12 17:20:24.388511 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:20:24.388518 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:20:24.388526 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:20:24.388533 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:20:24.388542 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000
Dec 12 17:20:24.388549 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:20:24.388556 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:20:24.388564 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:20:24.388571 kernel: SMP: Total of 4 processors activated.
Dec 12 17:20:24.388578 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:20:24.388587 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:20:24.388595 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:20:24.388602 kernel: CPU features: detected: Common not Private translations
Dec 12 17:20:24.388610 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:20:24.388617 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:20:24.388625 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:20:24.388632 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:20:24.388641 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:20:24.388648 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:20:24.388656 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:20:24.388663 kernel: alternatives: applying system-wide alternatives
Dec 12 17:20:24.388671 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:20:24.388679 kernel: Memory: 16324496K/16777216K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 429936K reserved, 16384K cma-reserved)
Dec 12 17:20:24.388686 kernel: devtmpfs: initialized
Dec 12 17:20:24.388695 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:20:24.388703 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:20:24.388710 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:20:24.388717 kernel: 0 pages in range for non-PLT usage
Dec 12 17:20:24.388725 kernel: 515184 pages in range for PLT usage
Dec 12 17:20:24.388732 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:20:24.388739 kernel: SMBIOS 3.0.0 present.
Dec 12 17:20:24.388748 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Dec 12 17:20:24.388756 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:20:24.388763 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:20:24.388771 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:20:24.388779 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:20:24.388786 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:20:24.388793 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:20:24.388802 kernel: audit: type=2000 audit(0.037:1): state=initialized audit_enabled=0 res=1
Dec 12 17:20:24.388810 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:20:24.388817 kernel: cpuidle: using governor menu
Dec 12 17:20:24.388825 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:20:24.388832 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:20:24.388840 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:20:24.388847 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:20:24.388854 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:20:24.388863 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:20:24.388871 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:20:24.388878 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:20:24.388885 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:20:24.388893 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:20:24.388901 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:20:24.388908 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:20:24.388916 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:20:24.388924 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:20:24.388931 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:20:24.388939 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:20:24.388946 kernel: ACPI: Interpreter enabled
Dec 12 17:20:24.388954 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:20:24.388961 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:20:24.388969 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:20:24.388977 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:20:24.388984 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:20:24.388992 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:20:24.388999 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:20:24.389007 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:20:24.389014 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:20:24.389172 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:20:24.389260 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:20:24.389340 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:20:24.389455 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:20:24.389543 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:20:24.389553 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:20:24.389564
kernel: PCI host bridge to bus 0000:00 Dec 12 17:20:24.389655 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 12 17:20:24.389728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 12 17:20:24.389798 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 12 17:20:24.389868 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 17:20:24.389962 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 12 17:20:24.390054 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.390156 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff] Dec 12 17:20:24.390236 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 12 17:20:24.390315 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff] Dec 12 17:20:24.390394 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Dec 12 17:20:24.390504 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.390588 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff] Dec 12 17:20:24.390666 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Dec 12 17:20:24.390746 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff] Dec 12 17:20:24.390837 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.390916 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff] Dec 12 17:20:24.390999 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Dec 12 17:20:24.391078 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff] Dec 12 17:20:24.391156 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Dec 12 17:20:24.391241 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.391320 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff] Dec 12 17:20:24.391406 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Dec 12 17:20:24.391496 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Dec 12 17:20:24.391583 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.391663 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff] Dec 12 17:20:24.391743 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Dec 12 17:20:24.391822 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff] Dec 12 17:20:24.391902 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Dec 12 17:20:24.391992 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.392072 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff] Dec 12 17:20:24.392152 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Dec 12 17:20:24.392230 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff] Dec 12 17:20:24.392310 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Dec 12 17:20:24.392425 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.392513 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff] Dec 12 17:20:24.392593 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Dec 12 17:20:24.392678 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.392759 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff] Dec 12 17:20:24.392838 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Dec 12 
17:20:24.392926 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.393006 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff] Dec 12 17:20:24.393086 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Dec 12 17:20:24.393171 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.393250 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff] Dec 12 17:20:24.393331 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Dec 12 17:20:24.393425 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.393508 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff] Dec 12 17:20:24.393587 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Dec 12 17:20:24.393679 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.393757 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff] Dec 12 17:20:24.393839 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Dec 12 17:20:24.393924 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.394005 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff] Dec 12 17:20:24.394085 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Dec 12 17:20:24.394172 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.394256 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff] Dec 12 17:20:24.394346 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Dec 12 17:20:24.394468 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.394558 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff] Dec 12 17:20:24.394647 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Dec 12 17:20:24.394761 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.394849 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff] Dec 12 17:20:24.394927 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Dec 12 17:20:24.395012 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.395092 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff] Dec 12 17:20:24.395169 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Dec 12 17:20:24.395253 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.395334 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff] Dec 12 17:20:24.395431 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Dec 12 17:20:24.395513 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff] Dec 12 17:20:24.395591 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff] Dec 12 17:20:24.395676 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.395754 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff] Dec 12 17:20:24.395836 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Dec 12 17:20:24.395916 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff] Dec 12 17:20:24.395994 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff] Dec 12 17:20:24.396084 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.396163 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff] Dec 12 17:20:24.396243 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Dec 12 17:20:24.396320 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff] Dec 12 17:20:24.396431 kernel: pci 0000:00:03.3: bridge window [mem 
0x11a00000-0x11bfffff] Dec 12 17:20:24.396531 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.396611 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff] Dec 12 17:20:24.396688 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Dec 12 17:20:24.396766 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff] Dec 12 17:20:24.396848 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff] Dec 12 17:20:24.396933 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.397012 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff] Dec 12 17:20:24.397091 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Dec 12 17:20:24.397169 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff] Dec 12 17:20:24.397247 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff] Dec 12 17:20:24.397333 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.397422 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff] Dec 12 17:20:24.397505 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Dec 12 17:20:24.397584 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff] Dec 12 17:20:24.397663 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff] Dec 12 17:20:24.397748 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.397829 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff] Dec 12 17:20:24.397907 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Dec 12 17:20:24.397984 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff] Dec 12 17:20:24.398062 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff] Dec 12 17:20:24.398146 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.398228 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff] Dec 12 17:20:24.398308 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Dec 12 17:20:24.398387 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff] Dec 12 17:20:24.398478 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff] Dec 12 17:20:24.398566 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.398645 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff] Dec 12 17:20:24.398725 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Dec 12 17:20:24.398803 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff] Dec 12 17:20:24.398881 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff] Dec 12 17:20:24.398964 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.399042 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff] Dec 12 17:20:24.399126 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Dec 12 17:20:24.399223 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff] Dec 12 17:20:24.399301 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff] Dec 12 17:20:24.399387 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.399479 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff] Dec 12 17:20:24.399558 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Dec 12 17:20:24.399635 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff] Dec 12 17:20:24.399715 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff] Dec 12 17:20:24.399812 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 
17:20:24.399898 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff] Dec 12 17:20:24.399978 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Dec 12 17:20:24.400057 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff] Dec 12 17:20:24.400135 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff] Dec 12 17:20:24.400220 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.400298 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff] Dec 12 17:20:24.400396 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Dec 12 17:20:24.400494 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff] Dec 12 17:20:24.400575 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff] Dec 12 17:20:24.400661 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.400740 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff] Dec 12 17:20:24.400818 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Dec 12 17:20:24.400896 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff] Dec 12 17:20:24.400974 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff] Dec 12 17:20:24.401062 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.401141 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff] Dec 12 17:20:24.401219 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Dec 12 17:20:24.401297 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff] Dec 12 17:20:24.401375 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff] Dec 12 17:20:24.401482 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:20:24.401567 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff] Dec 12 17:20:24.401644 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Dec 12 17:20:24.401726 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff] Dec 12 17:20:24.401806 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff] Dec 12 17:20:24.401894 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Dec 12 17:20:24.402471 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff] Dec 12 17:20:24.402562 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 12 17:20:24.402643 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Dec 12 17:20:24.403410 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Dec 12 17:20:24.403510 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit] Dec 12 17:20:24.403605 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Dec 12 17:20:24.403691 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff] Dec 12 17:20:24.403775 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Dec 12 17:20:24.403865 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Dec 12 17:20:24.403948 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Dec 12 17:20:24.404043 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 12 17:20:24.404135 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff] Dec 12 17:20:24.404222 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Dec 12 17:20:24.404310 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint Dec 12 17:20:24.404416 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff] Dec 12 17:20:24.404501 kernel: pci 
0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Dec 12 17:20:24.404583 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 12 17:20:24.404668 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Dec 12 17:20:24.404748 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Dec 12 17:20:24.404831 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 12 17:20:24.404911 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 12 17:20:24.404991 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Dec 12 17:20:24.405072 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 12 17:20:24.405151 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Dec 12 17:20:24.405229 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 12 17:20:24.405311 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 12 17:20:24.405392 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Dec 12 17:20:24.405485 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 12 17:20:24.405570 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 12 17:20:24.405650 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Dec 12 17:20:24.405730 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Dec 12 17:20:24.405813 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 12 17:20:24.405896 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Dec 12 17:20:24.405978 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Dec 12 17:20:24.406061 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 12 17:20:24.406142 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000 Dec 12 17:20:24.406222 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000 Dec 12 17:20:24.406310 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 12 17:20:24.406391 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Dec 12 17:20:24.406486 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Dec 12 17:20:24.406577 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 12 17:20:24.406660 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Dec 12 17:20:24.406758 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] 
add_size 200000 add_align 100000 Dec 12 17:20:24.406848 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 12 17:20:24.406934 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000 Dec 12 17:20:24.407017 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000 Dec 12 17:20:24.407119 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Dec 12 17:20:24.407204 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Dec 12 17:20:24.407284 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000 Dec 12 17:20:24.407370 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Dec 12 17:20:24.407465 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000 Dec 12 17:20:24.407545 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000 Dec 12 17:20:24.407632 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Dec 12 17:20:24.407712 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000 Dec 12 17:20:24.407792 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000 Dec 12 17:20:24.407877 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 12 17:20:24.407956 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Dec 12 17:20:24.408049 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Dec 12 17:20:24.408135 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 12 17:20:24.408219 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Dec 12 17:20:24.408302 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Dec 12 17:20:24.408414 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 12 17:20:24.408501 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Dec 12 17:20:24.408581 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Dec 12 17:20:24.408663 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 12 17:20:24.408760 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Dec 12 17:20:24.408842 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Dec 12 17:20:24.408925 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 12 17:20:24.409006 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Dec 12 17:20:24.409084 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 
Dec 12 17:20:24.409167 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Dec 12 17:20:24.409252 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Dec 12 17:20:24.409332 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Dec 12 17:20:24.409427 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Dec 12 17:20:24.409509 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Dec 12 17:20:24.409588 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Dec 12 17:20:24.409673 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Dec 12 17:20:24.409759 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Dec 12 17:20:24.409874 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Dec 12 17:20:24.409960 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 12 17:20:24.410043 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Dec 12 17:20:24.410122 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Dec 12 17:20:24.410211 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 12 17:20:24.410292 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Dec 12 17:20:24.410373 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Dec 12 17:20:24.410474 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 12 17:20:24.410557 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Dec 12 17:20:24.410638 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Dec 12 17:20:24.410722 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 12 17:20:24.410802 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Dec 12 17:20:24.410889 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Dec 12 17:20:24.410972 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 12 17:20:24.411051 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Dec 12 17:20:24.411129 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Dec 12 17:20:24.411215 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Dec 12 17:20:24.411294 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Dec 12 17:20:24.411373 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Dec 12 17:20:24.411467 kernel: pci 
0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Dec 12 17:20:24.411548 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Dec 12 17:20:24.411630 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Dec 12 17:20:24.411712 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 12 17:20:24.411791 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Dec 12 17:20:24.411870 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Dec 12 17:20:24.411951 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 12 17:20:24.412030 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Dec 12 17:20:24.412110 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Dec 12 17:20:24.412192 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 12 17:20:24.412271 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Dec 12 17:20:24.412366 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Dec 12 17:20:24.412480 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 12 17:20:24.412571 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Dec 12 17:20:24.412653 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Dec 12 17:20:24.412736 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 12 17:20:24.412819 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Dec 12 17:20:24.412899 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Dec 12 17:20:24.412983 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Dec 12 17:20:24.413065 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Dec 12 17:20:24.413145 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Dec 12 17:20:24.413223 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Dec 12 17:20:24.413306 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Dec 12 17:20:24.413385 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Dec 12 17:20:24.413486 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Dec 12 17:20:24.413567 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Dec 12 17:20:24.413652 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Dec 12 17:20:24.413732 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Dec 12 17:20:24.413813 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Dec 12 17:20:24.413892 kernel: pci 0000:00:01.5: bridge 
window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Dec 12 17:20:24.413972 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Dec 12 17:20:24.414050 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Dec 12 17:20:24.414133 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Dec 12 17:20:24.414212 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Dec 12 17:20:24.414292 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Dec 12 17:20:24.414371 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Dec 12 17:20:24.414462 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Dec 12 17:20:24.414541 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: assigned Dec 12 17:20:24.414625 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Dec 12 17:20:24.414703 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Dec 12 17:20:24.414784 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Dec 12 17:20:24.414863 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Dec 12 17:20:24.414944 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Dec 12 17:20:24.415023 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Dec 12 17:20:24.415105 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Dec 12 17:20:24.415184 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Dec 12 17:20:24.415264 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Dec 12 17:20:24.415342 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Dec 12 17:20:24.415431 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Dec 12 17:20:24.415510 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Dec 12 17:20:24.415591 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Dec 12 17:20:24.415673 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Dec 12 17:20:24.415753 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Dec 12 17:20:24.415833 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Dec 12 17:20:24.415914 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Dec 12 17:20:24.415992 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Dec 12 17:20:24.416071 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Dec 12 17:20:24.416154 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Dec 12 17:20:24.416235 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Dec 12 17:20:24.416313 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Dec 12 17:20:24.416422 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Dec 12 17:20:24.416507 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Dec 12 17:20:24.416590 
kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Dec 12 17:20:24.416672 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Dec 12 17:20:24.416752 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Dec 12 17:20:24.416831 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Dec 12 17:20:24.416911 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Dec 12 17:20:24.416989 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Dec 12 17:20:24.417069 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Dec 12 17:20:24.417150 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Dec 12 17:20:24.417232 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Dec 12 17:20:24.417310 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Dec 12 17:20:24.417390 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Dec 12 17:20:24.417488 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Dec 12 17:20:24.417571 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff]: assigned Dec 12 17:20:24.417651 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Dec 12 17:20:24.417769 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Dec 12 17:20:24.417850 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Dec 12 17:20:24.417933 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Dec 12 17:20:24.418014 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Dec 12 17:20:24.418095 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Dec 12 17:20:24.419112 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Dec 12 17:20:24.419199 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Dec 12 17:20:24.419278 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Dec 12 17:20:24.419358 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Dec 12 17:20:24.419449 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Dec 12 17:20:24.419531 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Dec 12 17:20:24.419609 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Dec 12 17:20:24.419688 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Dec 12 17:20:24.419769 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Dec 12 17:20:24.419849 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Dec 12 17:20:24.419927 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Dec 12 17:20:24.420007 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Dec 12 17:20:24.420085 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Dec 12 17:20:24.420165 kernel: pci 0000:00:01.5: BAR 0 [mem 0x14205000-0x14205fff]: assigned Dec 12 17:20:24.420245 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Dec 12 17:20:24.420326 kernel: pci 0000:00:01.6: BAR 0 [mem 
0x14206000-0x14206fff]: assigned Dec 12 17:20:24.420447 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Dec 12 17:20:24.420535 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Dec 12 17:20:24.420618 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Dec 12 17:20:24.420701 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Dec 12 17:20:24.420798 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Dec 12 17:20:24.420886 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Dec 12 17:20:24.420965 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Dec 12 17:20:24.421044 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Dec 12 17:20:24.421125 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Dec 12 17:20:24.421209 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Dec 12 17:20:24.421288 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Dec 12 17:20:24.421372 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Dec 12 17:20:24.421468 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Dec 12 17:20:24.421551 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Dec 12 17:20:24.421633 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Dec 12 17:20:24.421715 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Dec 12 17:20:24.421799 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Dec 12 17:20:24.421881 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Dec 12 17:20:24.421961 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.422039 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.422121 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Dec 12 17:20:24.422200 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.422280 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.422362 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Dec 12 17:20:24.422455 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.422538 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.422622 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Dec 12 17:20:24.422708 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.422803 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.422887 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Dec 12 17:20:24.422968 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.423046 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.423127 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Dec 12 17:20:24.423205 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.423283 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.423367 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Dec 12 17:20:24.423456 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't 
assign; no space Dec 12 17:20:24.423535 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.423616 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Dec 12 17:20:24.423694 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.423772 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.423865 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Dec 12 17:20:24.423948 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.424028 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.424108 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Dec 12 17:20:24.424188 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.424266 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.424362 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Dec 12 17:20:24.424460 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.424557 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.424647 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Dec 12 17:20:24.424728 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.424806 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.424886 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Dec 12 17:20:24.424969 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.425048 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.425129 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Dec 12 17:20:24.425208 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.425286 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.425365 kernel: pci 0000:00:04.5: BAR 0 [mem 0x1421d000-0x1421dfff]: assigned Dec 12 17:20:24.425461 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.425541 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.425624 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Dec 12 17:20:24.425703 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.425783 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.425865 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Dec 12 17:20:24.425947 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.426033 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.426115 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Dec 12 17:20:24.426196 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.426274 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.426354 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Dec 12 17:20:24.426446 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Dec 12 
17:20:24.426532 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Dec 12 17:20:24.426612 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Dec 12 17:20:24.426692 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Dec 12 17:20:24.426774 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Dec 12 17:20:24.426853 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Dec 12 17:20:24.426934 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Dec 12 17:20:24.427015 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Dec 12 17:20:24.427097 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff]: assigned Dec 12 17:20:24.427179 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Dec 12 17:20:24.427262 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Dec 12 17:20:24.427345 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Dec 12 17:20:24.427442 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Dec 12 17:20:24.427542 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Dec 12 17:20:24.427626 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.427712 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.427794 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.427874 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.427955 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.428034 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.428114 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.428194 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.428277 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.428378 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.428504 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.428592 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.428678 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.428776 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.428860 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.428947 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.429030 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.429108 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.429190 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.429268 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.429349 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.429448 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.429533 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.429613 kernel: pci 
0000:00:01.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.429702 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.429786 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.429870 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.429952 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.430035 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.430115 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.430199 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.430297 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.430381 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.430490 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.430577 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:20:24.430660 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:20:24.430748 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Dec 12 17:20:24.430833 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 12 17:20:24.430916 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Dec 12 17:20:24.431002 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 12 17:20:24.431082 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Dec 12 17:20:24.431163 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Dec 12 17:20:24.431248 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Dec 12 17:20:24.431326 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Dec 12 17:20:24.431425 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Dec 12 17:20:24.431508 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Dec 12 17:20:24.431595 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Dec 12 17:20:24.431679 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Dec 12 17:20:24.431759 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Dec 12 17:20:24.431845 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Dec 12 17:20:24.431940 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Dec 12 17:20:24.432026 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Dec 12 17:20:24.432106 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Dec 12 17:20:24.432185 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Dec 12 17:20:24.432265 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Dec 12 17:20:24.432367 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Dec 12 17:20:24.432482 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Dec 12 17:20:24.432565 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Dec 12 17:20:24.432644 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Dec 12 17:20:24.432722 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Dec 12 17:20:24.432807 
kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Dec 12 17:20:24.432901 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Dec 12 17:20:24.432987 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Dec 12 17:20:24.433067 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Dec 12 17:20:24.433149 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 12 17:20:24.433230 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Dec 12 17:20:24.433309 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Dec 12 17:20:24.433388 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 12 17:20:24.433483 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Dec 12 17:20:24.433563 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Dec 12 17:20:24.433642 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 12 17:20:24.433721 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Dec 12 17:20:24.433812 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Dec 12 17:20:24.433898 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Dec 12 17:20:24.433979 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Dec 12 17:20:24.434059 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Dec 12 17:20:24.434138 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Dec 12 17:20:24.434218 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Dec 12 17:20:24.434298 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Dec 12 17:20:24.434379 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Dec 12 17:20:24.434534 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Dec 12 17:20:24.434625 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Dec 12 17:20:24.434707 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Dec 12 17:20:24.434789 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Dec 12 17:20:24.434873 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Dec 12 17:20:24.434953 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Dec 12 17:20:24.435035 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Dec 12 17:20:24.435115 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff] Dec 12 17:20:24.435195 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Dec 12 17:20:24.435282 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Dec 12 17:20:24.435364 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Dec 12 17:20:24.435461 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Dec 12 17:20:24.435565 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Dec 12 17:20:24.435647 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Dec 12 17:20:24.435731 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Dec 12 17:20:24.435814 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Dec 12 17:20:24.435896 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Dec 12 17:20:24.435975 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Dec 12 17:20:24.436056 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Dec 12 17:20:24.436138 kernel: pci 0000:00:03.1: bridge window [mem 
0x12200000-0x123fffff] Dec 12 17:20:24.436219 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Dec 12 17:20:24.436308 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Dec 12 17:20:24.436419 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Dec 12 17:20:24.436506 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Dec 12 17:20:24.436593 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Dec 12 17:20:24.436677 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Dec 12 17:20:24.436757 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Dec 12 17:20:24.436838 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Dec 12 17:20:24.436918 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Dec 12 17:20:24.437001 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Dec 12 17:20:24.437082 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Dec 12 17:20:24.437161 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Dec 12 17:20:24.437241 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Dec 12 17:20:24.437325 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Dec 12 17:20:24.437428 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Dec 12 17:20:24.437513 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Dec 12 17:20:24.437593 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Dec 12 17:20:24.437677 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Dec 12 17:20:24.437757 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Dec 12 17:20:24.437838 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Dec 12 17:20:24.437921 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Dec 12 17:20:24.438007 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Dec 12 17:20:24.438088 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff] Dec 12 17:20:24.438168 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Dec 12 17:20:24.438249 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Dec 12 17:20:24.438332 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Dec 12 17:20:24.438427 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Dec 12 17:20:24.438528 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Dec 12 17:20:24.438608 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Dec 12 17:20:24.438693 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Dec 12 17:20:24.438773 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Dec 12 17:20:24.438851 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Dec 12 17:20:24.438928 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref] Dec 12 17:20:24.439009 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Dec 12 17:20:24.439091 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Dec 12 17:20:24.439170 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Dec 12 17:20:24.439248 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Dec 12 17:20:24.439330 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Dec 12 17:20:24.439420 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Dec 12 17:20:24.439507 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Dec 12 
17:20:24.439592 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Dec 12 17:20:24.439679 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Dec 12 17:20:24.439763 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Dec 12 17:20:24.439844 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Dec 12 17:20:24.439926 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] Dec 12 17:20:24.440020 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Dec 12 17:20:24.440102 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Dec 12 17:20:24.440183 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Dec 12 17:20:24.440279 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Dec 12 17:20:24.440375 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Dec 12 17:20:24.440470 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Dec 12 17:20:24.440553 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Dec 12 17:20:24.440631 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Dec 12 17:20:24.440714 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Dec 12 17:20:24.440798 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Dec 12 17:20:24.440877 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff] Dec 12 17:20:24.440954 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Dec 12 17:20:24.441036 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Dec 12 17:20:24.441115 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Dec 12 17:20:24.441194 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Dec 12 17:20:24.441288 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Dec 12 17:20:24.441370 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 12 17:20:24.441485 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:20:24.441560 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 12 17:20:24.441645 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Dec 12 17:20:24.441720 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Dec 12 17:20:24.441807 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Dec 12 17:20:24.441881 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Dec 12 17:20:24.441962 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Dec 12 17:20:24.442035 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Dec 12 17:20:24.442116 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Dec 12 17:20:24.442192 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Dec 12 17:20:24.442274 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Dec 12 17:20:24.442349 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Dec 12 17:20:24.442471 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Dec 12 17:20:24.442551 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 12 17:20:24.442645 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Dec 12 17:20:24.442722 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 12 17:20:24.442803 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Dec 12 
17:20:24.442877 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 12 17:20:24.442961 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Dec 12 17:20:24.443037 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Dec 12 17:20:24.443121 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Dec 12 17:20:24.443195 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Dec 12 17:20:24.443275 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Dec 12 17:20:24.443349 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Dec 12 17:20:24.443447 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Dec 12 17:20:24.443526 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Dec 12 17:20:24.443608 kernel: pci_bus 0000:0d: resource 1 [mem 0x11800000-0x119fffff] Dec 12 17:20:24.443683 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Dec 12 17:20:24.443764 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Dec 12 17:20:24.443839 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Dec 12 17:20:24.443922 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Dec 12 17:20:24.443995 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Dec 12 17:20:24.444081 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Dec 12 17:20:24.444155 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Dec 12 17:20:24.444235 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Dec 12 17:20:24.444312 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Dec 12 17:20:24.444451 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Dec 12 17:20:24.444531 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Dec 12 17:20:24.444613 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Dec 12 17:20:24.444689 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Dec 12 17:20:24.444767 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Dec 12 17:20:24.444847 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Dec 12 17:20:24.444921 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Dec 12 17:20:24.445012 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Dec 12 17:20:24.445095 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Dec 12 17:20:24.445173 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Dec 12 17:20:24.445247 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Dec 12 17:20:24.445330 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Dec 12 17:20:24.445418 kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Dec 12 17:20:24.445503 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Dec 12 17:20:24.445586 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Dec 12 17:20:24.445667 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Dec 12 17:20:24.445742 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Dec 12 17:20:24.445826 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Dec 12 17:20:24.445903 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Dec 12 17:20:24.445978 kernel: pci_bus 0000:18: resource 2 [mem 
0x8002e00000-0x8002ffffff 64bit pref] Dec 12 17:20:24.446066 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Dec 12 17:20:24.446145 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Dec 12 17:20:24.446237 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Dec 12 17:20:24.446326 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Dec 12 17:20:24.446412 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Dec 12 17:20:24.446504 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Dec 12 17:20:24.446597 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 12 17:20:24.446673 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Dec 12 17:20:24.446747 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Dec 12 17:20:24.446831 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Dec 12 17:20:24.446906 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Dec 12 17:20:24.446994 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Dec 12 17:20:24.447081 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Dec 12 17:20:24.447156 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Dec 12 17:20:24.447230 kernel: pci_bus 0000:1d: resource 2 [mem 0x8003800000-0x80039fffff 64bit pref] Dec 12 17:20:24.447310 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Dec 12 17:20:24.447388 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Dec 12 17:20:24.447489 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Dec 12 17:20:24.447574 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Dec 12 17:20:24.447648 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Dec 12 17:20:24.447722 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Dec 12 17:20:24.447803 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Dec 12 17:20:24.447877 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Dec 12 17:20:24.447952 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Dec 12 17:20:24.448038 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff] Dec 12 17:20:24.448130 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Dec 12 17:20:24.448215 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Dec 12 17:20:24.448226 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:20:24.448234 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:20:24.448245 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:20:24.448254 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:20:24.448262 kernel: iommu: Default domain type: Translated Dec 12 17:20:24.448270 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:20:24.448278 kernel: efivars: Registered efivars operations Dec 12 17:20:24.448286 kernel: vgaarb: loaded Dec 12 17:20:24.448294 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:20:24.448302 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:20:24.448311 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:20:24.448325 kernel: pnp: PnP ACPI init Dec 12 17:20:24.448468 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 12 17:20:24.448481 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:20:24.448489 kernel: NET: Registered 
PF_INET protocol family Dec 12 17:20:24.448497 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:20:24.448509 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Dec 12 17:20:24.448517 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:20:24.448525 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 17:20:24.448534 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 12 17:20:24.448542 kernel: TCP: Hash tables configured (established 131072 bind 65536) Dec 12 17:20:24.448550 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 12 17:20:24.448558 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 12 17:20:24.448567 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:20:24.448663 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Dec 12 17:20:24.448675 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:20:24.448684 kernel: kvm [1]: HYP mode not available Dec 12 17:20:24.448692 kernel: Initialise system trusted keyrings Dec 12 17:20:24.448700 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Dec 12 17:20:24.448708 kernel: Key type asymmetric registered Dec 12 17:20:24.448718 kernel: Asymmetric key parser 'x509' registered Dec 12 17:20:24.448726 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:20:24.448734 kernel: io scheduler mq-deadline registered Dec 12 17:20:24.448742 kernel: io scheduler kyber registered Dec 12 17:20:24.448750 kernel: io scheduler bfq registered Dec 12 17:20:24.448759 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:20:24.448844 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Dec 12 17:20:24.448926 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Dec 12 17:20:24.449022 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.449108 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Dec 12 17:20:24.449187 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Dec 12 17:20:24.449268 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.449354 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Dec 12 17:20:24.449449 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Dec 12 17:20:24.449535 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.449620 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Dec 12 17:20:24.449700 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Dec 12 17:20:24.449782 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.449866 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Dec 12 17:20:24.449947 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Dec 12 17:20:24.450028 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.450112 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Dec 12 17:20:24.450191 kernel: pcieport 0000:00:01.5: AER: 
enabled with IRQ 55 Dec 12 17:20:24.450271 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.450366 kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Dec 12 17:20:24.450470 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Dec 12 17:20:24.450555 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.450637 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Dec 12 17:20:24.450715 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Dec 12 17:20:24.450801 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.450812 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 12 17:20:24.450891 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Dec 12 17:20:24.450973 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Dec 12 17:20:24.451056 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.451140 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Dec 12 17:20:24.451219 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Dec 12 17:20:24.451302 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.451389 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Dec 12 17:20:24.451483 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Dec 12 17:20:24.451565 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.451648 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Dec 12 17:20:24.451733 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Dec 12 17:20:24.451812 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.451893 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Dec 12 17:20:24.451974 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Dec 12 17:20:24.452053 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.452135 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Dec 12 17:20:24.452215 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Dec 12 17:20:24.452293 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.452391 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Dec 12 17:20:24.452504 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Dec 12 17:20:24.452587 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.452674 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Dec 12 17:20:24.452756 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Dec 12 17:20:24.452834 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Dec 12 17:20:24.452845 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 12 17:20:24.452924 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Dec 12 17:20:24.453003 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Dec 12 17:20:24.453083 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.453164 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Dec 12 17:20:24.453244 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Dec 12 17:20:24.453322 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.453416 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Dec 12 17:20:24.453503 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Dec 12 17:20:24.453586 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.453670 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Dec 12 17:20:24.453750 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Dec 12 17:20:24.453830 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.453913 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Dec 12 17:20:24.453993 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Dec 12 17:20:24.454073 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.454159 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Dec 12 17:20:24.454241 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Dec 12 17:20:24.454322 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.454414 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72 Dec 12 17:20:24.454509 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72 Dec 12 17:20:24.454589 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.454675 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73 Dec 12 17:20:24.454757 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73 Dec 12 17:20:24.454837 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.454848 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 12 17:20:24.454928 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74 Dec 12 17:20:24.455007 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74 Dec 12 17:20:24.455087 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.455169 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75 Dec 12 17:20:24.455248 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75 Dec 12 17:20:24.455326 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.455424 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76 Dec 12 
17:20:24.455508 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76 Dec 12 17:20:24.455587 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.455673 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77 Dec 12 17:20:24.455753 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77 Dec 12 17:20:24.455832 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.455914 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78 Dec 12 17:20:24.455994 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78 Dec 12 17:20:24.456072 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.456156 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79 Dec 12 17:20:24.456235 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79 Dec 12 17:20:24.456314 kernel: pcieport 0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.456423 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80 Dec 12 17:20:24.456508 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80 Dec 12 17:20:24.456588 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.456673 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81 Dec 12 17:20:24.456753 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81 Dec 12 17:20:24.456832 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.456913 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82 Dec 12 17:20:24.456993 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82 Dec 12 17:20:24.457072 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:20:24.457083 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:20:24.457093 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:20:24.457177 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002) Dec 12 17:20:24.457264 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 12 17:20:24.457276 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:20:24.457284 kernel: thunder_xcv, ver 1.0 Dec 12 17:20:24.457292 kernel: thunder_bgx, ver 1.0 Dec 12 17:20:24.457300 kernel: nicpf, ver 1.0 Dec 12 17:20:24.457309 kernel: nicvf, ver 1.0 Dec 12 17:20:24.457415 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:20:24.457497 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:20:23 UTC (1765560023) Dec 12 17:20:24.457508 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:20:24.457517 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 12 17:20:24.457525 kernel: watchdog: NMI not fully supported Dec 12 17:20:24.457536 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:20:24.457544 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:20:24.457551 kernel: Segment Routing with IPv6 Dec 12 17:20:24.457560 kernel: In-situ OAM (IOAM) with 
IPv6 Dec 12 17:20:24.457568 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:20:24.457575 kernel: Key type dns_resolver registered Dec 12 17:20:24.457584 kernel: registered taskstats version 1 Dec 12 17:20:24.457593 kernel: Loading compiled-in X.509 certificates Dec 12 17:20:24.457602 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 12 17:20:24.457610 kernel: Demotion targets for Node 0: null Dec 12 17:20:24.457618 kernel: Key type .fscrypt registered Dec 12 17:20:24.457626 kernel: Key type fscrypt-provisioning registered Dec 12 17:20:24.457634 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 17:20:24.457642 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:20:24.457651 kernel: ima: No architecture policies found Dec 12 17:20:24.457659 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:20:24.457667 kernel: clk: Disabling unused clocks Dec 12 17:20:24.457675 kernel: PM: genpd: Disabling unused power domains Dec 12 17:20:24.457683 kernel: Freeing unused kernel memory: 12416K Dec 12 17:20:24.457691 kernel: Run /init as init process Dec 12 17:20:24.457699 kernel: with arguments: Dec 12 17:20:24.457707 kernel: /init Dec 12 17:20:24.457716 kernel: with environment: Dec 12 17:20:24.457724 kernel: HOME=/ Dec 12 17:20:24.457732 kernel: TERM=linux Dec 12 17:20:24.457739 kernel: ACPI: bus type USB registered Dec 12 17:20:24.457747 kernel: usbcore: registered new interface driver usbfs Dec 12 17:20:24.457756 kernel: usbcore: registered new interface driver hub Dec 12 17:20:24.457763 kernel: usbcore: registered new device driver usb Dec 12 17:20:24.457853 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 12 17:20:24.457937 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 12 17:20:24.458020 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 12 17:20:24.458102 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 12 17:20:24.458186 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 12 17:20:24.458268 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 12 17:20:24.458380 kernel: hub 1-0:1.0: USB hub found Dec 12 17:20:24.458519 kernel: hub 1-0:1.0: 4 ports detected Dec 12 17:20:24.458624 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 12 17:20:24.458731 kernel: hub 2-0:1.0: USB hub found Dec 12 17:20:24.458820 kernel: hub 2-0:1.0: 4 ports detected Dec 12 17:20:24.458916 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 12 17:20:24.458999 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Dec 12 17:20:24.459010 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:20:24.459019 kernel: GPT:25804799 != 104857599 Dec 12 17:20:24.459027 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:20:24.459036 kernel: GPT:25804799 != 104857599 Dec 12 17:20:24.459045 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:20:24.459054 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:20:24.459062 kernel: SCSI subsystem initialized Dec 12 17:20:24.459070 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
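The GPT warnings above mean the primary header records its backup copy at LBA 25804799 while the disk's last sector is 104857599, the usual situation when a smaller disk image has been written onto a larger volume; the disk-uuid.service messages later in this log show the headers being rewritten in place. A minimal Python sketch of the same check, assuming read access to /dev/vda (the device named in the log) and a purely illustrative helper name:

```python
import struct

SECTOR = 512

def gpt_backup_header_mismatch(path="/dev/vda"):
    """Report whether the backup-GPT location recorded in the primary header
    differs from the device's last sector -- the condition behind the
    "Alternate GPT header not at the end of the disk" warning."""
    with open(path, "rb") as dev:
        dev.seek(0, 2)                       # seek to the end to learn the device size
        last_lba = dev.tell() // SECTOR - 1
        dev.seek(1 * SECTOR)                 # primary GPT header lives in LBA 1
        hdr = dev.read(92)                   # GPT header is 92 bytes
    if hdr[:8] != b"EFI PART":
        raise ValueError("no GPT signature on %s" % path)
    (alt_lba,) = struct.unpack_from("<Q", hdr, 32)   # bytes 32-39: backup header LBA
    return alt_lba != last_lba, alt_lba, last_lba

# On the disk in this log this would return (True, 25804799, 104857599).
```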
Dec 12 17:20:24.459079 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:20:24.459087 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:20:24.459096 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:20:24.459105 kernel: raid6: neonx8 gen() 15776 MB/s Dec 12 17:20:24.459114 kernel: raid6: neonx4 gen() 15758 MB/s Dec 12 17:20:24.459123 kernel: raid6: neonx2 gen() 13245 MB/s Dec 12 17:20:24.459132 kernel: raid6: neonx1 gen() 10453 MB/s Dec 12 17:20:24.459140 kernel: raid6: int64x8 gen() 6846 MB/s Dec 12 17:20:24.459148 kernel: raid6: int64x4 gen() 7360 MB/s Dec 12 17:20:24.459157 kernel: raid6: int64x2 gen() 6077 MB/s Dec 12 17:20:24.459167 kernel: raid6: int64x1 gen() 5034 MB/s Dec 12 17:20:24.459176 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s Dec 12 17:20:24.459184 kernel: raid6: .... xor() 12058 MB/s, rmw enabled Dec 12 17:20:24.459192 kernel: raid6: using neon recovery algorithm Dec 12 17:20:24.459201 kernel: xor: measuring software checksum speed Dec 12 17:20:24.459211 kernel: 8regs : 19200 MB/sec Dec 12 17:20:24.459219 kernel: 32regs : 21676 MB/sec Dec 12 17:20:24.459229 kernel: arm64_neon : 28070 MB/sec Dec 12 17:20:24.459237 kernel: xor: using function: arm64_neon (28070 MB/sec) Dec 12 17:20:24.459336 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 12 17:20:24.459348 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:20:24.459357 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (275) Dec 12 17:20:24.459366 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 12 17:20:24.459376 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:20:24.459386 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:20:24.459394 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:20:24.459421 kernel: loop: module loaded Dec 12 17:20:24.459430 kernel: loop0: detected capacity change from 0 to 91480 Dec 12 17:20:24.459438 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:20:24.459548 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 12 17:20:24.459564 systemd[1]: Successfully made /usr/ read-only. Dec 12 17:20:24.459576 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:20:24.459586 systemd[1]: Detected virtualization kvm. Dec 12 17:20:24.459595 systemd[1]: Detected architecture arm64. Dec 12 17:20:24.459604 systemd[1]: Running in initrd. Dec 12 17:20:24.459614 systemd[1]: No hostname configured, using default hostname. Dec 12 17:20:24.459623 systemd[1]: Hostname set to . Dec 12 17:20:24.459632 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:20:24.459641 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:20:24.459650 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:20:24.459659 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 12 17:20:24.459668 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:20:24.459679 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:20:24.459688 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:20:24.459698 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:20:24.459707 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:20:24.459716 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:20:24.459727 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:20:24.459749 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:20:24.459758 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:20:24.459767 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:20:24.459776 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:20:24.459785 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:20:24.459794 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:20:24.459804 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:20:24.459813 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:20:24.459822 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:20:24.459831 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:20:24.459840 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:20:24.459849 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:20:24.459858 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:20:24.459868 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:20:24.459877 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:20:24.459886 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:20:24.459896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:20:24.459905 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:20:24.459915 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:20:24.459924 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:20:24.459935 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:20:24.459944 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:20:24.459954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:20:24.459964 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:20:24.459973 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:20:24.459982 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:20:24.459991 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Dec 12 17:20:24.460023 systemd-journald[418]: Collecting audit messages is enabled. Dec 12 17:20:24.460048 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:20:24.460057 kernel: Bridge firewalling registered Dec 12 17:20:24.460065 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:20:24.460075 kernel: audit: type=1130 audit(1765560024.403:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.460083 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:20:24.460094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:20:24.460103 kernel: audit: type=1130 audit(1765560024.412:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.460113 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:20:24.460122 kernel: audit: type=1130 audit(1765560024.419:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.460131 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:20:24.460140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:20:24.460150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:20:24.460160 kernel: audit: type=1130 audit(1765560024.433:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.460168 kernel: audit: type=1334 audit(1765560024.435:6): prog-id=6 op=LOAD Dec 12 17:20:24.460177 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:20:24.460186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:20:24.460195 kernel: audit: type=1130 audit(1765560024.442:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.460205 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:20:24.460215 kernel: audit: type=1130 audit(1765560024.456:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.460223 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:20:24.460233 systemd-journald[418]: Journal started Dec 12 17:20:24.460252 systemd-journald[418]: Runtime Journal (/run/log/journal/d8883f0c4de54136a01fead669ac4870) is 8M, max 319.5M, 311.5M free. Dec 12 17:20:24.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:24.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.435000 audit: BPF prog-id=6 op=LOAD Dec 12 17:20:24.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.399820 systemd-modules-load[420]: Inserted module 'br_netfilter' Dec 12 17:20:24.465143 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:20:24.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.468426 kernel: audit: type=1130 audit(1765560024.465:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.470060 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:20:24.479285 systemd-tmpfiles[460]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:20:24.485113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:20:24.486595 systemd-resolved[437]: Positive Trust Anchors: Dec 12 17:20:24.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.491545 dracut-cmdline[449]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:20:24.486604 systemd-resolved[437]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:20:24.498070 kernel: audit: type=1130 audit(1765560024.488:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.486607 systemd-resolved[437]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:20:24.486642 systemd-resolved[437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:20:24.509755 systemd-resolved[437]: Defaulting to hostname 'linux'. Dec 12 17:20:24.510724 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:20:24.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.512821 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:20:24.562462 kernel: Loading iSCSI transport class v2.0-870. Dec 12 17:20:24.573452 kernel: iscsi: registered transport (tcp) Dec 12 17:20:24.587442 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:20:24.587492 kernel: QLogic iSCSI HBA Driver Dec 12 17:20:24.608757 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:20:24.630527 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:20:24.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.632979 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:20:24.678150 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:20:24.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.681522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 17:20:24.683128 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:20:24.714493 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:20:24.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.716000 audit: BPF prog-id=7 op=LOAD Dec 12 17:20:24.716000 audit: BPF prog-id=8 op=LOAD Dec 12 17:20:24.717385 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:20:24.746877 systemd-udevd[699]: Using default interface naming scheme 'v257'. Dec 12 17:20:24.754640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:20:24.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:24.757919 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:20:24.782767 dracut-pre-trigger[763]: rd.md=0: removing MD RAID activation Dec 12 17:20:24.788046 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:20:24.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.790000 audit: BPF prog-id=9 op=LOAD Dec 12 17:20:24.791420 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:20:24.807130 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:20:24.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.809657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:20:24.833573 systemd-networkd[815]: lo: Link UP Dec 12 17:20:24.833582 systemd-networkd[815]: lo: Gained carrier Dec 12 17:20:24.835226 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:20:24.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.836416 systemd[1]: Reached target network.target - Network. Dec 12 17:20:24.897492 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:20:24.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:24.902753 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:20:24.956533 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:20:24.975544 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:20:24.982789 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:20:24.990569 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 12 17:20:24.990609 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 12 17:20:24.991003 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:20:24.995234 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 12 17:20:24.994597 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:20:25.019336 disk-uuid[876]: Primary Header is updated. Dec 12 17:20:25.019336 disk-uuid[876]: Secondary Entries is updated. Dec 12 17:20:25.019336 disk-uuid[876]: Secondary Header is updated. Dec 12 17:20:25.032088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:20:25.032213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 12 17:20:25.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:25.034378 systemd-networkd[815]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:20:25.034382 systemd-networkd[815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:20:25.035174 systemd-networkd[815]: eth0: Link UP Dec 12 17:20:25.035695 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:20:25.035892 systemd-networkd[815]: eth0: Gained carrier Dec 12 17:20:25.035903 systemd-networkd[815]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:20:25.053859 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 12 17:20:25.054076 kernel: usbcore: registered new interface driver usbhid Dec 12 17:20:25.054089 kernel: usbhid: USB HID core driver Dec 12 17:20:25.040901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:20:25.076528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:20:25.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:25.094492 systemd-networkd[815]: eth0: DHCPv4 address 10.0.6.252/25, gateway 10.0.6.129 acquired from 10.0.6.129 Dec 12 17:20:25.122027 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:20:25.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:25.123626 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:20:25.125277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:20:25.127416 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:20:25.130274 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:20:25.159632 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:20:25.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.065780 disk-uuid[877]: Warning: The kernel is still using the old partition table. Dec 12 17:20:26.065780 disk-uuid[877]: The new table will be used at the next reboot or after you Dec 12 17:20:26.065780 disk-uuid[877]: run partprobe(8) or kpartx(8) Dec 12 17:20:26.065780 disk-uuid[877]: The operation has completed successfully. Dec 12 17:20:26.075746 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:20:26.076481 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:20:26.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:26.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.078661 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:20:26.119431 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (912) Dec 12 17:20:26.121950 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:20:26.122005 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:20:26.126447 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:20:26.126502 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:20:26.132426 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:20:26.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.133482 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:20:26.136386 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:20:26.278146 ignition[931]: Ignition 2.22.0 Dec 12 17:20:26.278160 ignition[931]: Stage: fetch-offline Dec 12 17:20:26.278200 ignition[931]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:20:26.278211 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:20:26.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.280803 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:20:26.278371 ignition[931]: parsed url from cmdline: "" Dec 12 17:20:26.283876 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 12 17:20:26.278374 ignition[931]: no config URL provided Dec 12 17:20:26.278380 ignition[931]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:20:26.278389 ignition[931]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:20:26.278393 ignition[931]: failed to fetch config: resource requires networking Dec 12 17:20:26.278641 ignition[931]: Ignition finished successfully Dec 12 17:20:26.324452 ignition[945]: Ignition 2.22.0 Dec 12 17:20:26.324465 ignition[945]: Stage: fetch Dec 12 17:20:26.324602 ignition[945]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:20:26.324611 ignition[945]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:20:26.324689 ignition[945]: parsed url from cmdline: "" Dec 12 17:20:26.324692 ignition[945]: no config URL provided Dec 12 17:20:26.324696 ignition[945]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:20:26.324701 ignition[945]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:20:26.324954 ignition[945]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 12 17:20:26.324974 ignition[945]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 12 17:20:26.325204 ignition[945]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 12 17:20:26.650792 ignition[945]: GET result: OK Dec 12 17:20:26.651041 ignition[945]: parsing config with SHA512: 40263b7ae07846bf1650691c6f9c95998bfff32e0f67c18e4387d1ab3313785d8da0507bb1e889571a3d9d3cfa2a795dba64d3b147e1c98522a5ef202433e0e7 Dec 12 17:20:26.655791 unknown[945]: fetched base config from "system" Dec 12 17:20:26.655802 unknown[945]: fetched base config from "system" Dec 12 17:20:26.656116 ignition[945]: fetch: fetch complete Dec 12 17:20:26.655808 unknown[945]: fetched user config from "openstack" Dec 12 17:20:26.656121 ignition[945]: fetch: fetch passed Dec 12 17:20:26.658150 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 17:20:26.663949 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 12 17:20:26.663973 kernel: audit: type=1130 audit(1765560026.659:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.656157 ignition[945]: Ignition finished successfully Dec 12 17:20:26.663551 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:20:26.682497 systemd-networkd[815]: eth0: Gained IPv6LL Dec 12 17:20:26.691218 ignition[953]: Ignition 2.22.0 Dec 12 17:20:26.691234 ignition[953]: Stage: kargs Dec 12 17:20:26.691378 ignition[953]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:20:26.691386 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:20:26.692162 ignition[953]: kargs: kargs passed Dec 12 17:20:26.692207 ignition[953]: Ignition finished successfully Dec 12 17:20:26.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.697372 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:20:26.701907 kernel: audit: type=1130 audit(1765560026.697:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.701146 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 17:20:26.725726 ignition[960]: Ignition 2.22.0 Dec 12 17:20:26.725743 ignition[960]: Stage: disks Dec 12 17:20:26.725883 ignition[960]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:20:26.728997 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:20:26.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.725891 ignition[960]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:20:26.734995 kernel: audit: type=1130 audit(1765560026.729:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.730136 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 12 17:20:26.726627 ignition[960]: disks: disks passed Dec 12 17:20:26.734061 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:20:26.726671 ignition[960]: Ignition finished successfully Dec 12 17:20:26.736067 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:20:26.737867 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:20:26.739271 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:20:26.742125 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:20:26.785370 systemd-fsck[970]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 12 17:20:26.787876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:20:26.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.790271 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:20:26.794370 kernel: audit: type=1130 audit(1765560026.789:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:26.887454 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 12 17:20:26.888019 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:20:26.889341 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:20:26.892532 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:20:26.894170 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:20:26.895211 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:20:26.895816 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 12 17:20:26.896935 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:20:26.896965 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:20:26.905066 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:20:26.907205 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:20:26.918414 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (978) Dec 12 17:20:26.920723 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:20:26.920774 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:20:26.924985 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:20:26.925034 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:20:26.926169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:20:26.960429 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:26.963249 initrd-setup-root[1007]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:20:26.968381 initrd-setup-root[1014]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:20:26.973113 initrd-setup-root[1021]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:20:26.977002 initrd-setup-root[1028]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:20:27.056780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:20:27.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:27.060456 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:20:27.062995 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:20:27.064851 kernel: audit: type=1130 audit(1765560027.057:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:27.075926 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:20:27.078025 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:20:27.095496 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:20:27.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:27.100439 kernel: audit: type=1130 audit(1765560027.096:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:27.102833 ignition[1096]: INFO : Ignition 2.22.0 Dec 12 17:20:27.102833 ignition[1096]: INFO : Stage: mount Dec 12 17:20:27.104361 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:20:27.104361 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:20:27.104361 ignition[1096]: INFO : mount: mount passed Dec 12 17:20:27.104361 ignition[1096]: INFO : Ignition finished successfully Dec 12 17:20:27.112220 kernel: audit: type=1130 audit(1765560027.108:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:27.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:27.106873 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Dec 12 17:20:27.990446 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:29.995432 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:34.000429 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:34.004711 coreos-metadata[980]: Dec 12 17:20:34.004 WARN failed to locate config-drive, using the metadata service API instead Dec 12 17:20:34.023520 coreos-metadata[980]: Dec 12 17:20:34.023 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 17:20:34.151674 coreos-metadata[980]: Dec 12 17:20:34.151 INFO Fetch successful Dec 12 17:20:34.152862 coreos-metadata[980]: Dec 12 17:20:34.151 INFO wrote hostname ci-4515-1-0-8-acd31a5336 to /sysroot/etc/hostname Dec 12 17:20:34.154746 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 12 17:20:34.154835 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 12 17:20:34.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:34.161456 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:20:34.167388 kernel: audit: type=1130 audit(1765560034.159:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:34.167424 kernel: audit: type=1131 audit(1765560034.159:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:34.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:34.189741 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:20:34.208600 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1114) Dec 12 17:20:34.208649 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:20:34.208660 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:20:34.214725 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:20:34.214795 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:20:34.216288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:20:34.260257 ignition[1132]: INFO : Ignition 2.22.0 Dec 12 17:20:34.260257 ignition[1132]: INFO : Stage: files Dec 12 17:20:34.262231 ignition[1132]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:20:34.262231 ignition[1132]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:20:34.262231 ignition[1132]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:20:34.262231 ignition[1132]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:20:34.262231 ignition[1132]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:20:34.269356 ignition[1132]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:20:34.269356 ignition[1132]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:20:34.269356 ignition[1132]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:20:34.269356 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:20:34.269356 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 12 17:20:34.266783 unknown[1132]: wrote ssh authorized keys file for user: core Dec 12 17:20:35.741104 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:20:38.069853 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:20:38.071959 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:20:38.071959 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:20:38.071959 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:20:38.071959 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:20:38.071959 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:20:38.071959 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:20:38.071959 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:20:38.071959 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:20:38.085759 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:20:38.085759 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:20:38.085759 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:20:38.085759 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:20:38.085759 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:20:38.085759 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 12 17:20:38.459541 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:20:39.941289 ignition[1132]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:20:39.941289 ignition[1132]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:20:39.945199 ignition[1132]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:20:39.948134 ignition[1132]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:20:39.948134 ignition[1132]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:20:39.948134 ignition[1132]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:20:39.948134 ignition[1132]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:20:39.948134 ignition[1132]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:20:39.948134 ignition[1132]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:20:39.948134 ignition[1132]: INFO : files: files passed Dec 12 17:20:39.948134 ignition[1132]: INFO : Ignition finished successfully Dec 12 17:20:39.963839 kernel: audit: type=1130 audit(1765560039.954:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:39.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:39.952447 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:20:39.955663 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:20:39.980579 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:20:39.983275 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:20:39.983376 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:20:39.990998 kernel: audit: type=1130 audit(1765560039.985:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:39.991026 kernel: audit: type=1131 audit(1765560039.985:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:39.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:39.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:39.994436 initrd-setup-root-after-ignition[1166]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:20:39.994436 initrd-setup-root-after-ignition[1166]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:20:39.997483 initrd-setup-root-after-ignition[1170]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:20:40.002486 kernel: audit: type=1130 audit(1765560039.998:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:39.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:39.996457 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:20:39.999098 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:20:40.005531 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:20:40.041833 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:20:40.042482 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:20:40.049564 kernel: audit: type=1130 audit(1765560040.043:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.049593 kernel: audit: type=1131 audit(1765560040.043:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.044096 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:20:40.050919 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:20:40.052790 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:20:40.053729 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:20:40.095235 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:20:40.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:40.097779 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:20:40.101894 kernel: audit: type=1130 audit(1765560040.096:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.117564 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:20:40.117694 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:20:40.120218 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:20:40.122247 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:20:40.123989 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:20:40.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.124112 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:20:40.129754 kernel: audit: type=1131 audit(1765560040.125:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.128840 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:20:40.130703 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:20:40.132318 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:20:40.134007 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:20:40.135822 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:20:40.137668 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:20:40.139530 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:20:40.141328 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:20:40.143258 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:20:40.145190 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:20:40.146854 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:20:40.148263 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:20:40.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.148445 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:20:40.154041 kernel: audit: type=1131 audit(1765560040.149:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.153158 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:20:40.155094 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:20:40.156989 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:20:40.160510 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 12 17:20:40.162086 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:20:40.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.162205 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:20:40.168262 kernel: audit: type=1131 audit(1765560040.164:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.167354 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:20:40.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.167721 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:20:40.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.169547 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:20:40.169647 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:20:40.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.172251 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:20:40.173236 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:20:40.173382 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:20:40.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.175998 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:20:40.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.177654 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:20:40.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.177791 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:20:40.179680 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:20:40.179781 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:20:40.181597 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:20:40.181696 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:20:40.186935 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:20:40.191287 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Dec 12 17:20:40.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.203672 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:20:40.206223 ignition[1190]: INFO : Ignition 2.22.0 Dec 12 17:20:40.206223 ignition[1190]: INFO : Stage: umount Dec 12 17:20:40.207815 ignition[1190]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:20:40.207815 ignition[1190]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:20:40.207815 ignition[1190]: INFO : umount: umount passed Dec 12 17:20:40.207815 ignition[1190]: INFO : Ignition finished successfully Dec 12 17:20:40.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.208230 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:20:40.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.208369 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:20:40.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.209824 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:20:40.209893 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:20:40.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.211958 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:20:40.212048 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:20:40.213135 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:20:40.213181 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:20:40.214723 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 17:20:40.214772 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 17:20:40.216471 systemd[1]: Stopped target network.target - Network. Dec 12 17:20:40.218041 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:20:40.218098 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:20:40.219840 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:20:40.221262 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Dec 12 17:20:40.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.224774 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:20:40.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.225917 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:20:40.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.227559 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:20:40.229339 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:20:40.229383 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:20:40.230915 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:20:40.230952 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:20:40.232624 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 17:20:40.232648 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:20:40.234382 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:20:40.234460 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:20:40.235999 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:20:40.236041 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:20:40.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.237780 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:20:40.237828 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:20:40.239999 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:20:40.241534 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:20:40.250527 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:20:40.250637 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:20:40.258000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:20:40.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.258047 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:20:40.258153 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:20:40.261672 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:20:40.263381 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:20:40.263436 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:20:40.266000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:20:40.266112 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Dec 12 17:20:40.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.267057 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:20:40.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.267124 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:20:40.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.269307 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:20:40.269358 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:20:40.271109 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:20:40.271154 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:20:40.272952 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:20:40.284081 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:20:40.284234 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:20:40.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.286466 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:20:40.286505 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:20:40.288255 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:20:40.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.288284 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:20:40.290682 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:20:40.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.290736 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:20:40.293233 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:20:40.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.293282 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:20:40.295952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:20:40.296000 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:20:40.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:40.299455 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:20:40.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.300458 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:20:40.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.300519 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:20:40.302437 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:20:40.302485 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:20:40.304586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:20:40.304634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:20:40.325680 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:20:40.325802 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:20:40.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.328288 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:20:40.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:40.328432 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:20:40.330499 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:20:40.332500 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:20:40.349328 systemd[1]: Switching root. Dec 12 17:20:40.382038 systemd-journald[418]: Journal stopped Dec 12 17:20:41.210735 systemd-journald[418]: Received SIGTERM from PID 1 (systemd). Dec 12 17:20:41.210815 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:20:41.210837 kernel: SELinux: policy capability open_perms=1 Dec 12 17:20:41.210852 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:20:41.210862 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:20:41.210879 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:20:41.210892 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:20:41.210906 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:20:41.210916 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:20:41.210929 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:20:41.210940 systemd[1]: Successfully loaded SELinux policy in 60.934ms. Dec 12 17:20:41.210956 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.683ms. 
Dec 12 17:20:41.210967 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:20:41.210981 systemd[1]: Detected virtualization kvm. Dec 12 17:20:41.210992 systemd[1]: Detected architecture arm64. Dec 12 17:20:41.211006 systemd[1]: Detected first boot. Dec 12 17:20:41.211017 systemd[1]: Hostname set to <ci-4515-1-0-8-acd31a5336>. Dec 12 17:20:41.211030 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:20:41.211041 zram_generator::config[1236]: No configuration found. Dec 12 17:20:41.211059 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:20:41.211070 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:20:41.211080 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:20:41.211092 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:20:41.211104 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:20:41.211116 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:20:41.211127 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:20:41.211138 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:20:41.211149 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:20:41.211161 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:20:41.211174 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:20:41.211184 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:20:41.211195 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:20:41.211206 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:20:41.211217 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:20:41.211227 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:20:41.211242 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:20:41.211252 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:20:41.211264 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:20:41.211275 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:20:41.211285 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:20:41.211296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:20:41.211308 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:20:41.211319 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:20:41.211330 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:20:41.211341 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 17:20:41.211351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:20:41.211365 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:20:41.211376 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 17:20:41.211388 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:20:41.211415 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:20:41.211428 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:20:41.211438 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:20:41.211449 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:20:41.211460 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:20:41.211471 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 17:20:41.211483 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:20:41.211495 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 17:20:41.211505 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 17:20:41.211516 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:20:41.211527 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:20:41.211539 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:20:41.211549 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:20:41.211561 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:20:41.211572 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:20:41.211586 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:20:41.211597 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:20:41.211608 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:20:41.211619 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:20:41.211629 systemd[1]: Reached target machines.target - Containers. Dec 12 17:20:41.211642 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:20:41.211653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:20:41.211663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:20:41.211674 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:20:41.211687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:20:41.211699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:20:41.211712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:20:41.211725 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:20:41.211737 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 12 17:20:41.211748 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:20:41.211760 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:20:41.211771 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:20:41.211782 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:20:41.211793 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:20:41.211804 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:20:41.211815 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:20:41.211828 kernel: fuse: init (API version 7.41) Dec 12 17:20:41.211841 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:20:41.211851 kernel: ACPI: bus type drm_connector registered Dec 12 17:20:41.211861 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:20:41.211872 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:20:41.211883 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:20:41.211894 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:20:41.211905 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:20:41.211918 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:20:41.211928 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:20:41.211964 systemd-journald[1305]: Collecting audit messages is enabled. Dec 12 17:20:41.211994 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:20:41.212005 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:20:41.212017 systemd-journald[1305]: Journal started Dec 12 17:20:41.212042 systemd-journald[1305]: Runtime Journal (/run/log/journal/d8883f0c4de54136a01fead669ac4870) is 8M, max 319.5M, 311.5M free. Dec 12 17:20:41.075000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 17:20:41.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:41.167000 audit: BPF prog-id=14 op=UNLOAD Dec 12 17:20:41.167000 audit: BPF prog-id=13 op=UNLOAD Dec 12 17:20:41.168000 audit: BPF prog-id=15 op=LOAD Dec 12 17:20:41.168000 audit: BPF prog-id=16 op=LOAD Dec 12 17:20:41.168000 audit: BPF prog-id=17 op=LOAD Dec 12 17:20:41.208000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 17:20:41.208000 audit[1305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdb431130 a2=4000 a3=0 items=0 ppid=1 pid=1305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:41.208000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 17:20:40.979796 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:20:41.002643 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:20:41.003084 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:20:41.215431 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:20:41.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.216567 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:20:41.217820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:20:41.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.219434 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:20:41.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.220779 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:20:41.220943 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:20:41.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.222508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:20:41.222672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:20:41.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:41.224121 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:20:41.224284 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:20:41.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.225726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:20:41.225883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:20:41.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.227504 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:20:41.227679 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:20:41.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.228966 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:20:41.229126 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:20:41.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.230547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:20:41.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.232612 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:20:41.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.235005 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Dec 12 17:20:41.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.236700 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:20:41.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.248958 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:20:41.251153 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 17:20:41.253533 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:20:41.255567 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:20:41.256695 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:20:41.256742 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:20:41.258614 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:20:41.259943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:20:41.260055 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:20:41.268577 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:20:41.270648 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:20:41.271830 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:20:41.276264 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:20:41.277890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:20:41.280598 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:20:41.283920 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:20:41.286995 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:20:41.290181 systemd-journald[1305]: Time spent on flushing to /var/log/journal/d8883f0c4de54136a01fead669ac4870 is 29.541ms for 1817 entries. Dec 12 17:20:41.290181 systemd-journald[1305]: System Journal (/var/log/journal/d8883f0c4de54136a01fead669ac4870) is 8M, max 588.1M, 580.1M free. Dec 12 17:20:41.339261 systemd-journald[1305]: Received client request to flush runtime journal. Dec 12 17:20:41.339318 kernel: loop1: detected capacity change from 0 to 109872 Dec 12 17:20:41.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:41.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.290973 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:20:41.294467 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:20:41.296795 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:20:41.298551 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:20:41.302323 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:20:41.305253 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:20:41.317391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:20:41.340522 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:20:41.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.350373 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:20:41.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.353431 kernel: loop2: detected capacity change from 0 to 1648 Dec 12 17:20:41.354172 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:20:41.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.356000 audit: BPF prog-id=18 op=LOAD Dec 12 17:20:41.356000 audit: BPF prog-id=19 op=LOAD Dec 12 17:20:41.356000 audit: BPF prog-id=20 op=LOAD Dec 12 17:20:41.358291 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 17:20:41.360000 audit: BPF prog-id=21 op=LOAD Dec 12 17:20:41.361611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:20:41.365925 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:20:41.379000 audit: BPF prog-id=22 op=LOAD Dec 12 17:20:41.379000 audit: BPF prog-id=23 op=LOAD Dec 12 17:20:41.379000 audit: BPF prog-id=24 op=LOAD Dec 12 17:20:41.380878 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 17:20:41.382000 audit: BPF prog-id=25 op=LOAD Dec 12 17:20:41.382000 audit: BPF prog-id=26 op=LOAD Dec 12 17:20:41.382000 audit: BPF prog-id=27 op=LOAD Dec 12 17:20:41.383511 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 12 17:20:41.388412 kernel: loop3: detected capacity change from 0 to 100192 Dec 12 17:20:41.396089 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Dec 12 17:20:41.396113 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Dec 12 17:20:41.401566 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:20:41.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.429480 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:20:41.429886 systemd-nsresourced[1376]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 17:20:41.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.432191 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 17:20:41.433302 kernel: loop4: detected capacity change from 0 to 207008 Dec 12 17:20:41.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.483413 kernel: loop5: detected capacity change from 0 to 109872 Dec 12 17:20:41.487615 systemd-oomd[1372]: No swap; memory pressure usage will be degraded Dec 12 17:20:41.488843 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 17:20:41.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.495538 systemd-resolved[1373]: Positive Trust Anchors: Dec 12 17:20:41.495803 systemd-resolved[1373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:20:41.495861 systemd-resolved[1373]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:20:41.495895 systemd-resolved[1373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:20:41.496420 kernel: loop6: detected capacity change from 0 to 1648 Dec 12 17:20:41.501427 kernel: loop7: detected capacity change from 0 to 100192 Dec 12 17:20:41.502725 systemd-resolved[1373]: Using system hostname 'ci-4515-1-0-8-acd31a5336'. Dec 12 17:20:41.504256 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:20:41.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:41.506001 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:20:41.516421 kernel: loop1: detected capacity change from 0 to 207008 Dec 12 17:20:41.538157 (sd-merge)[1396]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. Dec 12 17:20:41.541108 (sd-merge)[1396]: Merged extensions into '/usr'. Dec 12 17:20:41.544650 systemd[1]: Reload requested from client PID 1356 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:20:41.544672 systemd[1]: Reloading... Dec 12 17:20:41.601431 zram_generator::config[1425]: No configuration found. Dec 12 17:20:41.752176 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:20:41.752558 systemd[1]: Reloading finished in 207 ms. Dec 12 17:20:41.782778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:20:41.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.785437 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:20:41.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:41.799993 systemd[1]: Starting ensure-sysext.service... Dec 12 17:20:41.801942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:20:41.802000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:20:41.802000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:20:41.803000 audit: BPF prog-id=28 op=LOAD Dec 12 17:20:41.803000 audit: BPF prog-id=29 op=LOAD Dec 12 17:20:41.804431 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:20:41.805000 audit: BPF prog-id=30 op=LOAD Dec 12 17:20:41.805000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:20:41.805000 audit: BPF prog-id=31 op=LOAD Dec 12 17:20:41.805000 audit: BPF prog-id=32 op=LOAD Dec 12 17:20:41.805000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:20:41.805000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:20:41.806000 audit: BPF prog-id=33 op=LOAD Dec 12 17:20:41.807000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:20:41.807000 audit: BPF prog-id=34 op=LOAD Dec 12 17:20:41.807000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:20:41.807000 audit: BPF prog-id=35 op=LOAD Dec 12 17:20:41.807000 audit: BPF prog-id=36 op=LOAD Dec 12 17:20:41.807000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:20:41.807000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:20:41.808000 audit: BPF prog-id=37 op=LOAD Dec 12 17:20:41.808000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:20:41.808000 audit: BPF prog-id=38 op=LOAD Dec 12 17:20:41.808000 audit: BPF prog-id=39 op=LOAD Dec 12 17:20:41.808000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:20:41.808000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:20:41.809000 audit: BPF prog-id=40 op=LOAD Dec 12 17:20:41.809000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:20:41.809000 audit: BPF prog-id=41 op=LOAD Dec 12 17:20:41.809000 audit: BPF prog-id=42 op=LOAD Dec 12 17:20:41.809000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:20:41.809000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:20:41.813828 systemd[1]: Reload requested from client PID 1463 ('systemctl') (unit ensure-sysext.service)... 
Dec 12 17:20:41.813848 systemd[1]: Reloading... Dec 12 17:20:41.817780 systemd-tmpfiles[1464]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:20:41.817816 systemd-tmpfiles[1464]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:20:41.818079 systemd-tmpfiles[1464]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:20:41.819098 systemd-tmpfiles[1464]: ACLs are not supported, ignoring. Dec 12 17:20:41.819173 systemd-tmpfiles[1464]: ACLs are not supported, ignoring. Dec 12 17:20:41.824720 systemd-tmpfiles[1464]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:20:41.824734 systemd-tmpfiles[1464]: Skipping /boot Dec 12 17:20:41.831063 systemd-tmpfiles[1464]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:20:41.831084 systemd-tmpfiles[1464]: Skipping /boot Dec 12 17:20:41.832619 systemd-udevd[1465]: Using default interface naming scheme 'v257'. Dec 12 17:20:41.875487 zram_generator::config[1497]: No configuration found. Dec 12 17:20:41.974433 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 17:20:42.028169 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0 Dec 12 17:20:42.028249 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 12 17:20:42.028276 kernel: [drm] features: -context_init Dec 12 17:20:42.031449 kernel: [drm] number of scanouts: 1 Dec 12 17:20:42.031519 kernel: [drm] number of cap sets: 0 Dec 12 17:20:42.032689 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0 Dec 12 17:20:42.036422 kernel: Console: switching to colour frame buffer device 160x50 Dec 12 17:20:42.055526 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 12 17:20:42.058238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:20:42.061765 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:20:42.062253 systemd[1]: Reloading finished in 248 ms. Dec 12 17:20:42.073574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:20:42.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:42.076000 audit: BPF prog-id=43 op=LOAD Dec 12 17:20:42.076000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:20:42.076000 audit: BPF prog-id=44 op=LOAD Dec 12 17:20:42.076000 audit: BPF prog-id=45 op=LOAD Dec 12 17:20:42.076000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:20:42.076000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:20:42.076000 audit: BPF prog-id=46 op=LOAD Dec 12 17:20:42.076000 audit: BPF prog-id=47 op=LOAD Dec 12 17:20:42.076000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:20:42.076000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:20:42.077000 audit: BPF prog-id=48 op=LOAD Dec 12 17:20:42.077000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:20:42.077000 audit: BPF prog-id=49 op=LOAD Dec 12 17:20:42.077000 audit: BPF prog-id=50 op=LOAD Dec 12 17:20:42.077000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:20:42.077000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:20:42.078000 audit: BPF prog-id=51 op=LOAD Dec 12 17:20:42.078000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:20:42.078000 audit: BPF prog-id=52 op=LOAD Dec 12 17:20:42.078000 audit: BPF prog-id=53 op=LOAD Dec 12 17:20:42.078000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:20:42.078000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:20:42.079000 audit: BPF prog-id=54 op=LOAD Dec 12 17:20:42.079000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:20:42.079000 audit: BPF prog-id=55 op=LOAD Dec 12 17:20:42.079000 audit: BPF prog-id=56 op=LOAD Dec 12 17:20:42.079000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:20:42.079000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:20:42.080000 audit: BPF prog-id=57 op=LOAD Dec 12 17:20:42.080000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:20:42.091879 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:20:42.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.135867 systemd[1]: Finished ensure-sysext.service. Dec 12 17:20:42.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.142680 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:20:42.145181 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:20:42.146560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:20:42.153511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:20:42.155894 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:20:42.159567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:20:42.162560 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:20:42.164654 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Dec 12 17:20:42.166150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:20:42.166259 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 12 17:20:42.167677 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:20:42.169538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:20:42.171026 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:20:42.172447 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:20:42.174000 audit: BPF prog-id=58 op=LOAD Dec 12 17:20:42.175668 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:20:42.176957 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:20:42.181641 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:20:42.183955 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:20:42.186124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:20:42.187436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:20:42.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.195409 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 12 17:20:42.195470 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 12 17:20:42.194548 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:20:42.194751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:20:42.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.198034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:20:42.198224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:20:42.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.200798 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:20:42.200960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 12 17:20:42.201461 kernel: PTP clock support registered Dec 12 17:20:42.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.204236 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Dec 12 17:20:42.204555 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Dec 12 17:20:42.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.206000 audit[1602]: SYSTEM_BOOT pid=1602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.213987 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:20:42.214125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:20:42.222639 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:20:42.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.224320 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:20:42.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:20:42.229603 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:20:42.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:20:42.235000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:20:42.235000 audit[1630]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe6851b70 a2=420 a3=0 items=0 ppid=1585 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:20:42.235000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:20:42.236348 augenrules[1630]: No rules Dec 12 17:20:42.237979 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:20:42.238293 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:20:42.277450 systemd-networkd[1601]: lo: Link UP Dec 12 17:20:42.277468 systemd-networkd[1601]: lo: Gained carrier Dec 12 17:20:42.278603 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:20:42.278993 systemd-networkd[1601]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:20:42.278997 systemd-networkd[1601]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:20:42.279420 systemd-networkd[1601]: eth0: Link UP Dec 12 17:20:42.280437 systemd-networkd[1601]: eth0: Gained carrier Dec 12 17:20:42.280454 systemd-networkd[1601]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:20:42.281876 systemd[1]: Reached target network.target - Network. Dec 12 17:20:42.284219 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:20:42.286509 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:20:42.288110 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:20:42.290481 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:20:42.297973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:20:42.299437 systemd-networkd[1601]: eth0: DHCPv4 address 10.0.6.252/25, gateway 10.0.6.129 acquired from 10.0.6.129 Dec 12 17:20:42.309110 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:20:42.629440 ldconfig[1593]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:20:42.633819 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:20:42.636828 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:20:42.660364 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:20:42.662073 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:20:42.663274 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:20:42.664561 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Dec 12 17:20:42.665868 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:20:42.666976 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:20:42.668205 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:20:42.669525 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:20:42.670675 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:20:42.671828 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:20:42.671862 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:20:42.672710 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:20:42.674916 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:20:42.677216 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:20:42.680017 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:20:42.681441 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:20:42.682576 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:20:42.686348 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:20:42.687684 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:20:42.689371 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:20:42.690441 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:20:42.691297 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:20:42.692270 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:20:42.692317 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:20:42.694581 systemd[1]: Starting chronyd.service - NTP client/server... Dec 12 17:20:42.696243 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:20:42.698434 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 17:20:42.701598 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:20:42.703317 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:20:42.706422 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:42.706984 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:20:42.711657 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:20:42.712680 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:20:42.713778 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:20:42.715165 jq[1657]: false Dec 12 17:20:42.716344 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:20:42.719388 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:20:42.733257 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 12 17:20:42.736986 chronyd[1650]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 12 17:20:42.738535 chronyd[1650]: Loaded seccomp filter (level 2) Dec 12 17:20:42.738573 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:20:42.739526 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:20:42.739989 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:20:42.740561 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:20:42.742487 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:20:42.744940 systemd[1]: Started chronyd.service - NTP client/server. Dec 12 17:20:42.749440 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:20:42.751460 extend-filesystems[1658]: Found /dev/vda6 Dec 12 17:20:42.750960 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:20:42.751163 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:20:42.753232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:20:42.755499 extend-filesystems[1658]: Found /dev/vda9 Dec 12 17:20:42.753468 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:20:42.760107 extend-filesystems[1658]: Checking size of /dev/vda9 Dec 12 17:20:42.762951 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:20:42.766638 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:20:42.767598 jq[1670]: true Dec 12 17:20:42.775584 tar[1680]: linux-arm64/LICENSE Dec 12 17:20:42.776049 tar[1680]: linux-arm64/helm Dec 12 17:20:42.778486 extend-filesystems[1658]: Resized partition /dev/vda9 Dec 12 17:20:42.784423 update_engine[1668]: I20251212 17:20:42.783901 1668 main.cc:92] Flatcar Update Engine starting Dec 12 17:20:42.784758 extend-filesystems[1703]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:20:42.792872 jq[1696]: true Dec 12 17:20:42.800010 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Dec 12 17:20:42.820932 dbus-daemon[1653]: [system] SELinux support is enabled Dec 12 17:20:42.821274 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:20:42.826286 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:20:42.826327 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:20:42.829609 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:20:42.829639 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:20:42.835244 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:20:42.835377 update_engine[1668]: I20251212 17:20:42.835263 1668 update_check_scheduler.cc:74] Next update check in 6m49s Dec 12 17:20:42.840750 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 12 17:20:42.869768 systemd-logind[1663]: New seat seat0. Dec 12 17:20:42.917200 systemd-logind[1663]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:20:42.917232 systemd-logind[1663]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 12 17:20:42.917524 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:20:42.921277 locksmithd[1715]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:20:42.961089 containerd[1697]: time="2025-12-12T17:20:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:20:42.970033 bash[1722]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:20:42.970379 containerd[1697]: time="2025-12-12T17:20:42.970098160Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:20:42.971454 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:20:42.976685 systemd[1]: Starting sshkeys.service... Dec 12 17:20:42.982906 containerd[1697]: time="2025-12-12T17:20:42.982726520Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.4µs" Dec 12 17:20:42.982906 containerd[1697]: time="2025-12-12T17:20:42.982765520Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:20:42.982906 containerd[1697]: time="2025-12-12T17:20:42.982808080Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:20:42.982906 containerd[1697]: time="2025-12-12T17:20:42.982819640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:20:42.983037 containerd[1697]: time="2025-12-12T17:20:42.982958640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:20:42.983037 containerd[1697]: time="2025-12-12T17:20:42.982973840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983037 containerd[1697]: time="2025-12-12T17:20:42.983026320Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983156 containerd[1697]: time="2025-12-12T17:20:42.983037760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983605 containerd[1697]: time="2025-12-12T17:20:42.983319120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983605 containerd[1697]: time="2025-12-12T17:20:42.983343600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983605 containerd[1697]: time="2025-12-12T17:20:42.983355280Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983605 containerd[1697]: 
time="2025-12-12T17:20:42.983363040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983605 containerd[1697]: time="2025-12-12T17:20:42.983535600Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983605 containerd[1697]: time="2025-12-12T17:20:42.983550080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983731 containerd[1697]: time="2025-12-12T17:20:42.983618240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983854 containerd[1697]: time="2025-12-12T17:20:42.983768400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983854 containerd[1697]: time="2025-12-12T17:20:42.983802400Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:20:42.983854 containerd[1697]: time="2025-12-12T17:20:42.983811880Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:20:42.983854 containerd[1697]: time="2025-12-12T17:20:42.983843560Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:20:42.984106 containerd[1697]: time="2025-12-12T17:20:42.984084040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:20:42.984355 containerd[1697]: time="2025-12-12T17:20:42.984148160Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:20:42.995827 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 17:20:43.000706 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 12 17:20:43.015574 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:43.016509 containerd[1697]: time="2025-12-12T17:20:43.016200960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:20:43.016509 containerd[1697]: time="2025-12-12T17:20:43.016301840Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:20:43.016509 containerd[1697]: time="2025-12-12T17:20:43.016396200Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:20:43.016509 containerd[1697]: time="2025-12-12T17:20:43.016427640Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:20:43.016509 containerd[1697]: time="2025-12-12T17:20:43.016441200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:20:43.016509 containerd[1697]: time="2025-12-12T17:20:43.016453840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:20:43.016509 containerd[1697]: time="2025-12-12T17:20:43.016465920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:20:43.016509 containerd[1697]: time="2025-12-12T17:20:43.016476800Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.016763320Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.016792400Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.016808520Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.016821240Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.016830840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.016843600Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.016978920Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.016998840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.017014200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.017025680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.017036480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events 
type=io.containerd.grpc.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.017063880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.017080480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.017092320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:20:43.018212 containerd[1697]: time="2025-12-12T17:20:43.017104800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:20:43.018552 containerd[1697]: time="2025-12-12T17:20:43.017115840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:20:43.018552 containerd[1697]: time="2025-12-12T17:20:43.017128960Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:20:43.018552 containerd[1697]: time="2025-12-12T17:20:43.017157480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:20:43.018552 containerd[1697]: time="2025-12-12T17:20:43.017195400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:20:43.018552 containerd[1697]: time="2025-12-12T17:20:43.017208920Z" level=info msg="Start snapshots syncer" Dec 12 17:20:43.018552 containerd[1697]: time="2025-12-12T17:20:43.017234120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:20:43.018650 containerd[1697]: time="2025-12-12T17:20:43.017525200Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:20:43.018650 containerd[1697]: time="2025-12-12T17:20:43.017571080Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017612840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017713600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017734360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017745200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017757400Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017769280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017779720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017789680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017799840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017812360Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017847880Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017861120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:20:43.018877 containerd[1697]: time="2025-12-12T17:20:43.017868720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:20:43.019093 containerd[1697]: time="2025-12-12T17:20:43.017884280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:20:43.019093 containerd[1697]: time="2025-12-12T17:20:43.017892480Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:20:43.019093 containerd[1697]: time="2025-12-12T17:20:43.017903920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:20:43.019093 containerd[1697]: time="2025-12-12T17:20:43.017914200Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:20:43.019093 containerd[1697]: time="2025-12-12T17:20:43.017925800Z" level=info msg="runtime interface created" Dec 12 17:20:43.019093 containerd[1697]: time="2025-12-12T17:20:43.017930480Z" level=info msg="created NRI interface" Dec 12 17:20:43.019093 containerd[1697]: time="2025-12-12T17:20:43.017938080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:20:43.019093 containerd[1697]: time="2025-12-12T17:20:43.017949240Z" level=info msg="Connect containerd service" Dec 12 17:20:43.019093 containerd[1697]: time="2025-12-12T17:20:43.017968040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:20:43.023887 containerd[1697]: time="2025-12-12T17:20:43.023555880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:20:43.104445 containerd[1697]: time="2025-12-12T17:20:43.104164360Z" level=info msg="Start subscribing containerd event" Dec 12 17:20:43.104445 containerd[1697]: time="2025-12-12T17:20:43.104225160Z" level=info msg="Start recovering state" Dec 12 17:20:43.104445 containerd[1697]: time="2025-12-12T17:20:43.104394800Z" level=info msg="Start event monitor" Dec 12 17:20:43.104445 containerd[1697]: time="2025-12-12T17:20:43.104431120Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:20:43.104445 containerd[1697]: time="2025-12-12T17:20:43.104439560Z" level=info msg="Start streaming server" Dec 12 17:20:43.104445 containerd[1697]: time="2025-12-12T17:20:43.104448640Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:20:43.104445 containerd[1697]: time="2025-12-12T17:20:43.104455760Z" level=info msg="runtime interface starting up..." 
Dec 12 17:20:43.104445 containerd[1697]: time="2025-12-12T17:20:43.104461440Z" level=info msg="starting plugins..." Dec 12 17:20:43.105613 containerd[1697]: time="2025-12-12T17:20:43.104476960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:20:43.105613 containerd[1697]: time="2025-12-12T17:20:43.104664520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:20:43.105613 containerd[1697]: time="2025-12-12T17:20:43.104710560Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:20:43.105613 containerd[1697]: time="2025-12-12T17:20:43.104803160Z" level=info msg="containerd successfully booted in 0.144064s" Dec 12 17:20:43.105022 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:20:43.109450 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Dec 12 17:20:43.135543 extend-filesystems[1703]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:20:43.135543 extend-filesystems[1703]: old_desc_blocks = 1, new_desc_blocks = 6 Dec 12 17:20:43.135543 extend-filesystems[1703]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Dec 12 17:20:43.140012 extend-filesystems[1658]: Resized filesystem in /dev/vda9 Dec 12 17:20:43.138073 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:20:43.138749 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:20:43.229076 tar[1680]: linux-arm64/README.md Dec 12 17:20:43.252136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:20:43.492718 sshd_keygen[1678]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:20:43.511368 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:20:43.514309 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:20:43.530029 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:20:43.530311 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:20:43.535972 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:20:43.556792 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:20:43.560482 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:20:43.563106 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:20:43.565003 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:20:43.721432 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:43.834587 systemd-networkd[1601]: eth0: Gained IPv6LL Dec 12 17:20:43.837221 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:20:43.839205 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:20:43.841765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:20:43.844192 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:20:43.875132 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:20:44.027446 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:44.659251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
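The "failed to load cni during init" error logged during the plugin setup above is expected at this point in boot: the CRI plugin looks for network configuration under /etc/cni/net.d (the confDir value in the config dump above) and finds nothing until a CNI plugin is installed. A hedged sketch of that directory check:

```python
# Sketch of the check behind "no network config found in /etc/cni/net.d";
# confDir is taken from the cri config dump logged above.
from pathlib import Path

conf_dir = Path("/etc/cni/net.d")
configs = [p for p in conf_dir.glob("*") if p.suffix in (".conf", ".conflist", ".json")] \
          if conf_dir.is_dir() else []
print(configs if configs else "no network config found in /etc/cni/net.d")
```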
Dec 12 17:20:44.670973 (kubelet)[1795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:20:45.179241 kubelet[1795]: E1212 17:20:45.179170 1795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:20:45.181376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:20:45.181527 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:20:45.182047 systemd[1]: kubelet.service: Consumed 760ms CPU time, 256.5M memory peak. Dec 12 17:20:45.729431 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:46.038439 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:49.736425 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:49.743774 coreos-metadata[1652]: Dec 12 17:20:49.743 WARN failed to locate config-drive, using the metadata service API instead Dec 12 17:20:49.759152 coreos-metadata[1652]: Dec 12 17:20:49.759 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 12 17:20:50.050436 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:20:50.059964 coreos-metadata[1737]: Dec 12 17:20:50.059 WARN failed to locate config-drive, using the metadata service API instead Dec 12 17:20:50.073156 coreos-metadata[1737]: Dec 12 17:20:50.073 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 12 17:20:51.760497 coreos-metadata[1652]: Dec 12 17:20:51.760 INFO Fetch successful Dec 12 17:20:51.760985 coreos-metadata[1652]: Dec 12 17:20:51.760 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 17:20:51.886330 coreos-metadata[1737]: Dec 12 17:20:51.886 INFO Fetch successful Dec 12 17:20:51.886330 coreos-metadata[1737]: Dec 12 17:20:51.886 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 17:20:52.040937 coreos-metadata[1652]: Dec 12 17:20:52.040 INFO Fetch successful Dec 12 17:20:52.040937 coreos-metadata[1652]: Dec 12 17:20:52.040 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 12 17:20:52.047866 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:20:52.049162 systemd[1]: Started sshd@0-10.0.6.252:22-139.178.89.65:54456.service - OpenSSH per-connection server daemon (139.178.89.65:54456). Dec 12 17:20:52.132057 coreos-metadata[1737]: Dec 12 17:20:52.131 INFO Fetch successful Dec 12 17:20:52.137971 unknown[1737]: wrote ssh authorized keys file for user: core Dec 12 17:20:52.165522 update-ssh-keys[1818]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:20:52.166154 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 17:20:52.167522 systemd[1]: Finished sshkeys.service. 
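The kubelet exit above is the usual pre-bootstrap state on a fresh node: /var/lib/kubelet/config.yaml does not exist until something such as kubeadm writes it, so systemd keeps restarting the unit (the restart counter appears later in the log). A minimal sketch of the failing precondition, using the path from the error message:

```python
# Sketch of the precondition the kubelet is failing on: its config file
# (path taken from the error above) has not been written yet.
import os
import sys

cfg = "/var/lib/kubelet/config.yaml"
if not os.path.isfile(cfg):
    sys.exit(f"failed to load kubelet config file, path: {cfg}")
print("kubelet config present")
```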
Dec 12 17:20:52.176125 coreos-metadata[1652]: Dec 12 17:20:52.176 INFO Fetch successful Dec 12 17:20:52.176125 coreos-metadata[1652]: Dec 12 17:20:52.176 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 12 17:20:52.309545 coreos-metadata[1652]: Dec 12 17:20:52.309 INFO Fetch successful Dec 12 17:20:52.309545 coreos-metadata[1652]: Dec 12 17:20:52.309 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 12 17:20:52.442236 coreos-metadata[1652]: Dec 12 17:20:52.442 INFO Fetch successful Dec 12 17:20:52.442236 coreos-metadata[1652]: Dec 12 17:20:52.442 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 12 17:20:52.577687 coreos-metadata[1652]: Dec 12 17:20:52.577 INFO Fetch successful Dec 12 17:20:52.622278 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 17:20:52.622902 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:20:52.623038 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:20:52.623185 systemd[1]: Startup finished in 2.733s (kernel) + 16.356s (initrd) + 12.194s (userspace) = 31.283s. Dec 12 17:20:52.909262 sshd[1814]: Accepted publickey for core from 139.178.89.65 port 54456 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:20:52.911435 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:52.917867 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:20:52.918930 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:20:52.922671 systemd-logind[1663]: New session 1 of user core. Dec 12 17:20:52.945223 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:20:52.947618 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:20:52.961095 (systemd)[1828]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:20:52.964747 systemd-logind[1663]: New session c1 of user core. Dec 12 17:20:53.091789 systemd[1828]: Queued start job for default target default.target. Dec 12 17:20:53.114792 systemd[1828]: Created slice app.slice - User Application Slice. Dec 12 17:20:53.114832 systemd[1828]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:20:53.114845 systemd[1828]: Reached target paths.target - Paths. Dec 12 17:20:53.114896 systemd[1828]: Reached target timers.target - Timers. Dec 12 17:20:53.116188 systemd[1828]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:20:53.117038 systemd[1828]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 17:20:53.127032 systemd[1828]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:20:53.127116 systemd[1828]: Reached target sockets.target - Sockets. Dec 12 17:20:53.127705 systemd[1828]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:20:53.127802 systemd[1828]: Reached target basic.target - Basic System. Dec 12 17:20:53.127845 systemd[1828]: Reached target default.target - Main User Target. Dec 12 17:20:53.127870 systemd[1828]: Startup finished in 157ms. Dec 12 17:20:53.128202 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:20:53.129590 systemd[1]: Started session-1.scope - Session 1 of User core. 
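With no config-drive present (the repeated "config-2: Can't lookup blockdev" messages), the metadata agents fall back to the link-local metadata service and fetch the hostname, SSH keys, instance data and addresses over HTTP, as the entries above show. A rough sketch of that fallback, using endpoints that appear in the log (error handling and retries omitted):

```python
# Rough sketch of the metadata-service fallback; endpoints as logged above.
import urllib.request

BASE = "http://169.254.169.254"
for path in ("/latest/meta-data/hostname",
             "/latest/meta-data/public-keys/0/openssh-key"):
    with urllib.request.urlopen(BASE + path, timeout=2) as resp:
        print(path, "->", resp.read().decode().strip())
```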
Dec 12 17:20:53.613740 systemd[1]: Started sshd@1-10.0.6.252:22-139.178.89.65:54468.service - OpenSSH per-connection server daemon (139.178.89.65:54468). Dec 12 17:20:54.434673 sshd[1841]: Accepted publickey for core from 139.178.89.65 port 54468 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:20:54.435994 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:54.440826 systemd-logind[1663]: New session 2 of user core. Dec 12 17:20:54.448612 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:20:54.903343 sshd[1844]: Connection closed by 139.178.89.65 port 54468 Dec 12 17:20:54.903617 sshd-session[1841]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:54.907599 systemd[1]: sshd@1-10.0.6.252:22-139.178.89.65:54468.service: Deactivated successfully. Dec 12 17:20:54.909822 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:20:54.910519 systemd-logind[1663]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:20:54.911843 systemd-logind[1663]: Removed session 2. Dec 12 17:20:55.073941 systemd[1]: Started sshd@2-10.0.6.252:22-139.178.89.65:54476.service - OpenSSH per-connection server daemon (139.178.89.65:54476). Dec 12 17:20:55.432193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:20:55.433665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:20:55.582143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:20:55.586325 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:20:55.883781 sshd[1850]: Accepted publickey for core from 139.178.89.65 port 54476 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:20:55.885022 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:55.889471 systemd-logind[1663]: New session 3 of user core. Dec 12 17:20:55.901786 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:20:55.939603 kubelet[1861]: E1212 17:20:55.939543 1861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:20:55.942847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:20:55.942978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:20:55.943344 systemd[1]: kubelet.service: Consumed 150ms CPU time, 106.3M memory peak. Dec 12 17:20:56.350247 sshd[1868]: Connection closed by 139.178.89.65 port 54476 Dec 12 17:20:56.350641 sshd-session[1850]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:56.354347 systemd[1]: sshd@2-10.0.6.252:22-139.178.89.65:54476.service: Deactivated successfully. Dec 12 17:20:56.356984 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:20:56.357875 systemd-logind[1663]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:20:56.358910 systemd-logind[1663]: Removed session 3. Dec 12 17:20:56.519867 systemd[1]: Started sshd@3-10.0.6.252:22-139.178.89.65:54480.service - OpenSSH per-connection server daemon (139.178.89.65:54480). 
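The "RSA SHA256:..." value sshd logs for each accepted key is the unpadded base64 encoding of the SHA-256 digest of the public-key blob. A small illustrative sketch of recomputing such a fingerprint from an authorized_keys entry (field parsing deliberately simplified):

```python
# Recompute an OpenSSH-style SHA256 fingerprint from an authorized_keys line.
import base64
import hashlib

def fingerprint(authorized_keys_line: str) -> str:
    key_blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()

# Example: fingerprint("ssh-rsa AAAAB3NzaC1yc2E... core@host")
```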
Dec 12 17:20:57.354755 sshd[1875]: Accepted publickey for core from 139.178.89.65 port 54480 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:20:57.356107 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:57.360469 systemd-logind[1663]: New session 4 of user core. Dec 12 17:20:57.369672 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:20:57.826703 sshd[1878]: Connection closed by 139.178.89.65 port 54480 Dec 12 17:20:57.827044 sshd-session[1875]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:57.831677 systemd[1]: sshd@3-10.0.6.252:22-139.178.89.65:54480.service: Deactivated successfully. Dec 12 17:20:57.833385 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:20:57.835468 systemd-logind[1663]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:20:57.836410 systemd-logind[1663]: Removed session 4. Dec 12 17:20:57.992126 systemd[1]: Started sshd@4-10.0.6.252:22-139.178.89.65:54484.service - OpenSSH per-connection server daemon (139.178.89.65:54484). Dec 12 17:20:58.810285 sshd[1884]: Accepted publickey for core from 139.178.89.65 port 54484 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:20:58.811807 sshd-session[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:20:58.816462 systemd-logind[1663]: New session 5 of user core. Dec 12 17:20:58.823728 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:20:59.138569 sudo[1888]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:20:59.138908 sudo[1888]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:20:59.152764 sudo[1888]: pam_unix(sudo:session): session closed for user root Dec 12 17:20:59.308853 sshd[1887]: Connection closed by 139.178.89.65 port 54484 Dec 12 17:20:59.309710 sshd-session[1884]: pam_unix(sshd:session): session closed for user core Dec 12 17:20:59.313941 systemd[1]: sshd@4-10.0.6.252:22-139.178.89.65:54484.service: Deactivated successfully. Dec 12 17:20:59.315707 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:20:59.317083 systemd-logind[1663]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:20:59.318091 systemd-logind[1663]: Removed session 5. Dec 12 17:20:59.476082 systemd[1]: Started sshd@5-10.0.6.252:22-139.178.89.65:54500.service - OpenSSH per-connection server daemon (139.178.89.65:54500). Dec 12 17:21:00.292754 sshd[1894]: Accepted publickey for core from 139.178.89.65 port 54500 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:21:00.294110 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:00.298870 systemd-logind[1663]: New session 6 of user core. Dec 12 17:21:00.310850 systemd[1]: Started session-6.scope - Session 6 of User core. 
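The sudo entry above switches SELinux to enforcing with `setenforce 1`. A quick read-only sketch for confirming the resulting mode, assuming selinuxfs is mounted at its usual location:

```python
# Read the current SELinux enforcement mode (1 = enforcing, 0 = permissive).
with open("/sys/fs/selinux/enforce") as f:
    print("enforcing" if f.read().strip() == "1" else "permissive")
```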
Dec 12 17:21:00.612746 sudo[1899]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:21:00.613013 sudo[1899]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:21:00.617447 sudo[1899]: pam_unix(sudo:session): session closed for user root Dec 12 17:21:00.623745 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:21:00.624011 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:21:00.632692 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:21:00.669000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:21:00.670108 augenrules[1921]: No rules Dec 12 17:21:00.671450 kernel: kauditd_printk_skb: 184 callbacks suppressed Dec 12 17:21:00.671531 kernel: audit: type=1305 audit(1765560060.669:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:21:00.671237 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:21:00.672485 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:21:00.669000 audit[1921]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff23dea90 a2=420 a3=0 items=0 ppid=1902 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:00.674073 sudo[1898]: pam_unix(sudo:session): session closed for user root Dec 12 17:21:00.677070 kernel: audit: type=1300 audit(1765560060.669:230): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff23dea90 a2=420 a3=0 items=0 ppid=1902 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:00.677149 kernel: audit: type=1327 audit(1765560060.669:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:21:00.669000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:21:00.678899 kernel: audit: type=1130 audit(1765560060.672:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.684799 kernel: audit: type=1131 audit(1765560060.672:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:00.684856 kernel: audit: type=1106 audit(1765560060.673:233): pid=1898 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.673000 audit[1898]: USER_END pid=1898 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.673000 audit[1898]: CRED_DISP pid=1898 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.692821 kernel: audit: type=1104 audit(1765560060.673:234): pid=1898 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.831474 sshd[1897]: Connection closed by 139.178.89.65 port 54500 Dec 12 17:21:00.831587 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Dec 12 17:21:00.832000 audit[1894]: USER_END pid=1894 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:00.837807 systemd[1]: sshd@5-10.0.6.252:22-139.178.89.65:54500.service: Deactivated successfully. Dec 12 17:21:00.832000 audit[1894]: CRED_DISP pid=1894 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:00.842060 kernel: audit: type=1106 audit(1765560060.832:235): pid=1894 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:00.842128 kernel: audit: type=1104 audit(1765560060.832:236): pid=1894 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:00.842147 kernel: audit: type=1131 audit(1765560060.840:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.6.252:22-139.178.89.65:54500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.6.252:22-139.178.89.65:54500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:00.842492 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:21:00.846519 systemd-logind[1663]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:21:00.847652 systemd-logind[1663]: Removed session 6. 
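The PROCTITLE fields in the audit records above are hex-encoded command lines with NUL-separated arguments. Decoding the value recorded for auditctl recovers the command that reloaded the (now empty) rule set:

```python
# Decode an audit PROCTITLE value (hex, NUL-separated argv) from the records above.
hex_proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
print(bytes.fromhex(hex_proctitle).replace(b"\x00", b" ").decode())
# -> /sbin/auditctl -R /etc/audit/audit.rules
```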
Dec 12 17:21:00.995387 systemd[1]: Started sshd@6-10.0.6.252:22-139.178.89.65:54084.service - OpenSSH per-connection server daemon (139.178.89.65:54084). Dec 12 17:21:00.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.6.252:22-139.178.89.65:54084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:01.816000 audit[1930]: USER_ACCT pid=1930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:01.817498 sshd[1930]: Accepted publickey for core from 139.178.89.65 port 54084 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:21:01.817000 audit[1930]: CRED_ACQ pid=1930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:01.817000 audit[1930]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd45d090 a2=3 a3=0 items=0 ppid=1 pid=1930 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:01.817000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:21:01.818625 sshd-session[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:01.823301 systemd-logind[1663]: New session 7 of user core. Dec 12 17:21:01.830799 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:21:01.832000 audit[1930]: USER_START pid=1930 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:01.833000 audit[1933]: CRED_ACQ pid=1933 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:02.130000 audit[1934]: USER_ACCT pid=1934 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:02.130000 audit[1934]: CRED_REFR pid=1934 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:02.130998 sudo[1934]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:21:02.131245 sudo[1934]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:21:02.132000 audit[1934]: USER_START pid=1934 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:02.699682 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:21:02.716724 (dockerd)[1954]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:21:03.136958 dockerd[1954]: time="2025-12-12T17:21:03.136837640Z" level=info msg="Starting up" Dec 12 17:21:03.139220 dockerd[1954]: time="2025-12-12T17:21:03.138706640Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:21:03.149351 dockerd[1954]: time="2025-12-12T17:21:03.149311960Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:21:03.197985 dockerd[1954]: time="2025-12-12T17:21:03.197942280Z" level=info msg="Loading containers: start." Dec 12 17:21:03.208436 kernel: Initializing XFRM netlink socket Dec 12 17:21:03.293000 audit[2006]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.293000 audit[2006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffec4300f0 a2=0 a3=0 items=0 ppid=1954 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.293000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:21:03.295000 audit[2008]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.295000 audit[2008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd1ef1b60 a2=0 a3=0 items=0 ppid=1954 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:21:03.297000 audit[2010]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.297000 audit[2010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf34a990 a2=0 a3=0 items=0 ppid=1954 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.297000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:21:03.299000 audit[2012]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.299000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5cc6dd0 a2=0 a3=0 items=0 ppid=1954 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.299000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:21:03.301000 audit[2014]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.301000 audit[2014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5dcaa90 a2=0 a3=0 items=0 ppid=1954 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.301000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:21:03.303000 audit[2016]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.303000 audit[2016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe1991750 a2=0 a3=0 items=0 ppid=1954 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:21:03.304000 audit[2018]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.304000 audit[2018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe7da21e0 a2=0 a3=0 items=0 ppid=1954 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.304000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:21:03.306000 audit[2020]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.306000 audit[2020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd7d15a90 a2=0 a3=0 items=0 ppid=1954 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.306000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:21:03.334000 audit[2023]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.334000 audit[2023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffeb2b9e60 a2=0 a3=0 items=0 ppid=1954 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.334000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:21:03.336000 audit[2025]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.336000 audit[2025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffedfd9890 a2=0 a3=0 items=0 ppid=1954 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.336000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:21:03.338000 audit[2027]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.338000 audit[2027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffeba3a9c0 a2=0 a3=0 items=0 ppid=1954 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.338000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:21:03.341000 audit[2029]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.341000 audit[2029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc09461d0 a2=0 a3=0 items=0 ppid=1954 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:21:03.343000 audit[2031]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.343000 audit[2031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd3d69280 a2=0 a3=0 items=0 ppid=1954 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.343000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:21:03.377000 audit[2061]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.377000 audit[2061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffefd91a90 a2=0 a3=0 items=0 ppid=1954 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.377000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:21:03.379000 audit[2063]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.379000 audit[2063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffffa16b40 a2=0 a3=0 items=0 ppid=1954 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:21:03.381000 audit[2065]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.381000 audit[2065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe54533f0 a2=0 a3=0 items=0 ppid=1954 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.381000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:21:03.383000 audit[2067]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.383000 audit[2067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe42a2fe0 a2=0 a3=0 items=0 ppid=1954 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.383000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:21:03.385000 audit[2069]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.385000 audit[2069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd01e3b10 a2=0 a3=0 items=0 ppid=1954 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.385000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:21:03.387000 audit[2071]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.387000 audit[2071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff2bf0560 a2=0 a3=0 items=0 ppid=1954 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.387000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:21:03.389000 audit[2073]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2073 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.389000 audit[2073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeb88a310 a2=0 a3=0 items=0 ppid=1954 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.389000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:21:03.390000 audit[2075]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.390000 audit[2075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffa5e09e0 a2=0 a3=0 items=0 ppid=1954 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.390000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:21:03.393000 audit[2077]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.393000 audit[2077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd04955c0 a2=0 a3=0 items=0 ppid=1954 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.393000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 17:21:03.394000 audit[2079]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.394000 audit[2079]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcbf55080 a2=0 a3=0 items=0 ppid=1954 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.394000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:21:03.396000 audit[2081]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.396000 audit[2081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd523b770 a2=0 a3=0 items=0 ppid=1954 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.396000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:21:03.398000 audit[2083]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2083 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.398000 audit[2083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffed14ad10 a2=0 a3=0 items=0 ppid=1954 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:21:03.400000 audit[2085]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.400000 audit[2085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffebe83c20 a2=0 a3=0 items=0 ppid=1954 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.400000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:21:03.405000 audit[2090]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.405000 audit[2090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdb875160 a2=0 a3=0 items=0 ppid=1954 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:21:03.407000 audit[2092]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.407000 audit[2092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff15af410 a2=0 a3=0 items=0 ppid=1954 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.407000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:21:03.409000 audit[2094]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.409000 audit[2094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdfb3ab90 a2=0 a3=0 items=0 ppid=1954 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.409000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:21:03.410000 audit[2096]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.410000 audit[2096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff9dd6300 a2=0 a3=0 items=0 ppid=1954 pid=2096 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.410000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:21:03.412000 audit[2098]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.412000 audit[2098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd14b3a10 a2=0 a3=0 items=0 ppid=1954 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:21:03.415000 audit[2100]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:03.415000 audit[2100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff3833a90 a2=0 a3=0 items=0 ppid=1954 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.415000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:21:03.449000 audit[2105]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.449000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffc8261200 a2=0 a3=0 items=0 ppid=1954 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:21:03.451000 audit[2107]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.451000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe1264830 a2=0 a3=0 items=0 ppid=1954 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.451000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:21:03.458000 audit[2115]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.458000 audit[2115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffe25b14a0 a2=0 a3=0 items=0 ppid=1954 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.458000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:21:03.468000 audit[2121]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.468000 audit[2121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd165b4b0 a2=0 a3=0 items=0 ppid=1954 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.468000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:21:03.470000 audit[2123]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.470000 audit[2123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffd7e08850 a2=0 a3=0 items=0 ppid=1954 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.470000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:21:03.472000 audit[2125]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.472000 audit[2125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe2022b50 a2=0 a3=0 items=0 ppid=1954 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.472000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:21:03.474000 audit[2127]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.474000 audit[2127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc34949f0 a2=0 a3=0 items=0 ppid=1954 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.474000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:21:03.476000 audit[2129]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:03.476000 audit[2129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffede405c0 a2=0 a3=0 items=0 ppid=1954 pid=2129 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:03.476000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:21:03.477462 systemd-networkd[1601]: docker0: Link UP Dec 12 17:21:03.482983 dockerd[1954]: time="2025-12-12T17:21:03.482919720Z" level=info msg="Loading containers: done." Dec 12 17:21:03.526594 dockerd[1954]: time="2025-12-12T17:21:03.526536680Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:21:03.526764 dockerd[1954]: time="2025-12-12T17:21:03.526629080Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:21:03.526821 dockerd[1954]: time="2025-12-12T17:21:03.526803080Z" level=info msg="Initializing buildkit" Dec 12 17:21:03.549817 dockerd[1954]: time="2025-12-12T17:21:03.549767480Z" level=info msg="Completed buildkit initialization" Dec 12 17:21:03.555471 dockerd[1954]: time="2025-12-12T17:21:03.555418120Z" level=info msg="Daemon has completed initialization" Dec 12 17:21:03.555695 dockerd[1954]: time="2025-12-12T17:21:03.555496800Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:21:03.555757 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:21:03.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:05.283157 containerd[1697]: time="2025-12-12T17:21:05.283109720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 17:21:06.054250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:21:06.055914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:21:06.063611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355476267.mount: Deactivated successfully. Dec 12 17:21:06.522912 chronyd[1650]: Selected source PHC0 Dec 12 17:21:06.755401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:21:06.759430 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 17:21:06.759540 kernel: audit: type=1130 audit(1765560066.755:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:06.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:06.769743 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:21:07.090410 kubelet[2192]: E1212 17:21:07.090335 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:21:07.093040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:21:07.093193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:21:07.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:21:07.093746 systemd[1]: kubelet.service: Consumed 149ms CPU time, 107.7M memory peak. Dec 12 17:21:07.097485 kernel: audit: type=1131 audit(1765560067.092:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:21:07.966581 containerd[1697]: time="2025-12-12T17:21:07.966518613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:07.968692 containerd[1697]: time="2025-12-12T17:21:07.968630553Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=24835833" Dec 12 17:21:07.969507 containerd[1697]: time="2025-12-12T17:21:07.969469665Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:07.974104 containerd[1697]: time="2025-12-12T17:21:07.974011223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:07.975476 containerd[1697]: time="2025-12-12T17:21:07.975435649Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 2.692279849s" Dec 12 17:21:07.975476 containerd[1697]: time="2025-12-12T17:21:07.975471129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 12 17:21:07.976423 containerd[1697]: time="2025-12-12T17:21:07.976213842Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 17:21:10.084158 containerd[1697]: time="2025-12-12T17:21:10.084057184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:10.085442 containerd[1697]: time="2025-12-12T17:21:10.085374228Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: 
active requests=0, bytes read=22610801" Dec 12 17:21:10.086256 containerd[1697]: time="2025-12-12T17:21:10.086226870Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:10.089742 containerd[1697]: time="2025-12-12T17:21:10.089690039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:10.090831 containerd[1697]: time="2025-12-12T17:21:10.090538521Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 2.114290279s" Dec 12 17:21:10.090831 containerd[1697]: time="2025-12-12T17:21:10.090568361Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 12 17:21:10.091020 containerd[1697]: time="2025-12-12T17:21:10.090969322Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 17:21:11.548430 containerd[1697]: time="2025-12-12T17:21:11.548271628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:11.550410 containerd[1697]: time="2025-12-12T17:21:11.550355553Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17610300" Dec 12 17:21:11.551966 containerd[1697]: time="2025-12-12T17:21:11.551942277Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:11.555519 containerd[1697]: time="2025-12-12T17:21:11.555487127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:11.556580 containerd[1697]: time="2025-12-12T17:21:11.556486049Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.465488087s" Dec 12 17:21:11.556580 containerd[1697]: time="2025-12-12T17:21:11.556520929Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 12 17:21:11.556958 containerd[1697]: time="2025-12-12T17:21:11.556936690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 17:21:12.832526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932355962.mount: Deactivated successfully. 
Dec 12 17:21:13.091780 containerd[1697]: time="2025-12-12T17:21:13.091673557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:13.092829 containerd[1697]: time="2025-12-12T17:21:13.092762960Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27558078" Dec 12 17:21:13.093624 containerd[1697]: time="2025-12-12T17:21:13.093595322Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:13.095926 containerd[1697]: time="2025-12-12T17:21:13.095885528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:13.096560 containerd[1697]: time="2025-12-12T17:21:13.096533170Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.5395644s" Dec 12 17:21:13.096603 containerd[1697]: time="2025-12-12T17:21:13.096568210Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 12 17:21:13.097001 containerd[1697]: time="2025-12-12T17:21:13.096969731Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 17:21:13.866483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009451825.mount: Deactivated successfully. 
Dec 12 17:21:14.549766 containerd[1697]: time="2025-12-12T17:21:14.549709580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:14.551659 containerd[1697]: time="2025-12-12T17:21:14.551600544Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16078392" Dec 12 17:21:14.553501 containerd[1697]: time="2025-12-12T17:21:14.553460869Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:14.557434 containerd[1697]: time="2025-12-12T17:21:14.557341438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:14.558792 containerd[1697]: time="2025-12-12T17:21:14.558753281Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.46174863s" Dec 12 17:21:14.559007 containerd[1697]: time="2025-12-12T17:21:14.558892321Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 12 17:21:14.559448 containerd[1697]: time="2025-12-12T17:21:14.559392323Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:21:15.103625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3332723044.mount: Deactivated successfully. 
Dec 12 17:21:15.110850 containerd[1697]: time="2025-12-12T17:21:15.110802171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:21:15.111522 containerd[1697]: time="2025-12-12T17:21:15.111469013Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:21:15.112503 containerd[1697]: time="2025-12-12T17:21:15.112455535Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:21:15.114561 containerd[1697]: time="2025-12-12T17:21:15.114532140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:21:15.115445 containerd[1697]: time="2025-12-12T17:21:15.115106461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 555.661578ms" Dec 12 17:21:15.115445 containerd[1697]: time="2025-12-12T17:21:15.115137461Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:21:15.115645 containerd[1697]: time="2025-12-12T17:21:15.115604542Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 17:21:15.874832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439317585.mount: Deactivated successfully. Dec 12 17:21:17.343691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 12 17:21:17.345877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:21:18.106153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:21:18.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:18.109421 kernel: audit: type=1130 audit(1765560078.104:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:18.111249 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:21:18.151193 kubelet[2377]: E1212 17:21:18.151129 2377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:21:18.154169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:21:18.154295 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 12 17:21:18.154820 systemd[1]: kubelet.service: Consumed 148ms CPU time, 106.5M memory peak. Dec 12 17:21:18.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:21:18.158457 kernel: audit: type=1131 audit(1765560078.153:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:21:18.216760 containerd[1697]: time="2025-12-12T17:21:18.216711689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:18.218549 containerd[1697]: time="2025-12-12T17:21:18.218500734Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Dec 12 17:21:18.219543 containerd[1697]: time="2025-12-12T17:21:18.219473576Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:18.223024 containerd[1697]: time="2025-12-12T17:21:18.222974945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:18.223986 containerd[1697]: time="2025-12-12T17:21:18.223957668Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.108320926s" Dec 12 17:21:18.224038 containerd[1697]: time="2025-12-12T17:21:18.223987108Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 12 17:21:23.390831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:21:23.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:23.390995 systemd[1]: kubelet.service: Consumed 148ms CPU time, 106.5M memory peak. Dec 12 17:21:23.393624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:21:23.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:23.396492 kernel: audit: type=1130 audit(1765560083.389:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:23.396584 kernel: audit: type=1131 audit(1765560083.389:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:23.417809 systemd[1]: Reload requested from client PID 2417 ('systemctl') (unit session-7.scope)... Dec 12 17:21:23.417830 systemd[1]: Reloading... Dec 12 17:21:23.504480 zram_generator::config[2465]: No configuration found. Dec 12 17:21:23.676511 systemd[1]: Reloading finished in 258 ms. Dec 12 17:21:23.704000 audit: BPF prog-id=63 op=LOAD Dec 12 17:21:23.704000 audit: BPF prog-id=64 op=LOAD Dec 12 17:21:23.707720 kernel: audit: type=1334 audit(1765560083.704:294): prog-id=63 op=LOAD Dec 12 17:21:23.707787 kernel: audit: type=1334 audit(1765560083.704:295): prog-id=64 op=LOAD Dec 12 17:21:23.707836 kernel: audit: type=1334 audit(1765560083.704:296): prog-id=46 op=UNLOAD Dec 12 17:21:23.704000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:21:23.704000 audit: BPF prog-id=47 op=UNLOAD Dec 12 17:21:23.706000 audit: BPF prog-id=65 op=LOAD Dec 12 17:21:23.706000 audit: BPF prog-id=60 op=UNLOAD Dec 12 17:21:23.707000 audit: BPF prog-id=66 op=LOAD Dec 12 17:21:23.709415 kernel: audit: type=1334 audit(1765560083.704:297): prog-id=47 op=UNLOAD Dec 12 17:21:23.709456 kernel: audit: type=1334 audit(1765560083.706:298): prog-id=65 op=LOAD Dec 12 17:21:23.709482 kernel: audit: type=1334 audit(1765560083.706:299): prog-id=60 op=UNLOAD Dec 12 17:21:23.709503 kernel: audit: type=1334 audit(1765560083.707:300): prog-id=66 op=LOAD Dec 12 17:21:23.709520 kernel: audit: type=1334 audit(1765560083.708:301): prog-id=67 op=LOAD Dec 12 17:21:23.708000 audit: BPF prog-id=67 op=LOAD Dec 12 17:21:23.708000 audit: BPF prog-id=61 op=UNLOAD Dec 12 17:21:23.708000 audit: BPF prog-id=62 op=UNLOAD Dec 12 17:21:23.709000 audit: BPF prog-id=68 op=LOAD Dec 12 17:21:23.709000 audit: BPF prog-id=57 op=UNLOAD Dec 12 17:21:23.710000 audit: BPF prog-id=69 op=LOAD Dec 12 17:21:23.728000 audit: BPF prog-id=48 op=UNLOAD Dec 12 17:21:23.728000 audit: BPF prog-id=70 op=LOAD Dec 12 17:21:23.728000 audit: BPF prog-id=71 op=LOAD Dec 12 17:21:23.728000 audit: BPF prog-id=49 op=UNLOAD Dec 12 17:21:23.728000 audit: BPF prog-id=50 op=UNLOAD Dec 12 17:21:23.729000 audit: BPF prog-id=72 op=LOAD Dec 12 17:21:23.729000 audit: BPF prog-id=54 op=UNLOAD Dec 12 17:21:23.729000 audit: BPF prog-id=73 op=LOAD Dec 12 17:21:23.730000 audit: BPF prog-id=74 op=LOAD Dec 12 17:21:23.730000 audit: BPF prog-id=55 op=UNLOAD Dec 12 17:21:23.730000 audit: BPF prog-id=56 op=UNLOAD Dec 12 17:21:23.730000 audit: BPF prog-id=75 op=LOAD Dec 12 17:21:23.730000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:21:23.730000 audit: BPF prog-id=76 op=LOAD Dec 12 17:21:23.730000 audit: BPF prog-id=77 op=LOAD Dec 12 17:21:23.730000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:21:23.730000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:21:23.731000 audit: BPF prog-id=78 op=LOAD Dec 12 17:21:23.731000 audit: BPF prog-id=58 op=UNLOAD Dec 12 17:21:23.732000 audit: BPF prog-id=79 op=LOAD Dec 12 17:21:23.732000 audit: BPF prog-id=51 op=UNLOAD Dec 12 17:21:23.732000 audit: BPF prog-id=80 op=LOAD Dec 12 17:21:23.732000 audit: BPF prog-id=81 op=LOAD Dec 12 17:21:23.732000 audit: BPF prog-id=52 op=UNLOAD Dec 12 17:21:23.732000 audit: BPF prog-id=53 op=UNLOAD Dec 12 17:21:23.733000 audit: BPF prog-id=82 op=LOAD Dec 12 17:21:23.733000 audit: BPF prog-id=59 op=UNLOAD Dec 12 17:21:23.752503 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:21:23.752589 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:21:23.752881 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 17:21:23.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:21:23.752945 systemd[1]: kubelet.service: Consumed 100ms CPU time, 95.1M memory peak. Dec 12 17:21:23.754556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:21:24.431255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:21:24.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:24.440738 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:21:24.475615 kubelet[2510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:21:24.475615 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:21:24.475615 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:21:24.475977 kubelet[2510]: I1212 17:21:24.475674 2510 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:21:25.249482 kubelet[2510]: I1212 17:21:25.249432 2510 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:21:25.249482 kubelet[2510]: I1212 17:21:25.249468 2510 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:21:25.249729 kubelet[2510]: I1212 17:21:25.249721 2510 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:21:25.340906 kubelet[2510]: E1212 17:21:25.340839 2510 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.6.252:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.6.252:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:21:25.345738 kubelet[2510]: I1212 17:21:25.345709 2510 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:21:25.354698 kubelet[2510]: I1212 17:21:25.354674 2510 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:21:25.357523 kubelet[2510]: I1212 17:21:25.357495 2510 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:21:25.358569 kubelet[2510]: I1212 17:21:25.358520 2510 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:21:25.358730 kubelet[2510]: I1212 17:21:25.358558 2510 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-8-acd31a5336","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:21:25.358836 kubelet[2510]: I1212 17:21:25.358824 2510 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:21:25.358836 kubelet[2510]: I1212 17:21:25.358835 2510 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:21:25.359079 kubelet[2510]: I1212 17:21:25.359048 2510 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:21:25.363599 kubelet[2510]: I1212 17:21:25.363554 2510 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:21:25.363599 kubelet[2510]: I1212 17:21:25.363587 2510 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:21:25.363745 kubelet[2510]: I1212 17:21:25.363614 2510 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:21:25.363745 kubelet[2510]: I1212 17:21:25.363626 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:21:25.367098 kubelet[2510]: I1212 17:21:25.366984 2510 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:21:25.368372 kubelet[2510]: I1212 17:21:25.367621 2510 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:21:25.368372 kubelet[2510]: W1212 17:21:25.367776 2510 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 12 17:21:25.368707 kubelet[2510]: I1212 17:21:25.368684 2510 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:21:25.368742 kubelet[2510]: I1212 17:21:25.368720 2510 server.go:1287] "Started kubelet" Dec 12 17:21:25.369478 kubelet[2510]: W1212 17:21:25.369093 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.6.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.6.252:6443: connect: connection refused Dec 12 17:21:25.369478 kubelet[2510]: E1212 17:21:25.369178 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.6.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.6.252:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:21:25.369596 kubelet[2510]: W1212 17:21:25.369433 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.6.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-8-acd31a5336&limit=500&resourceVersion=0": dial tcp 10.0.6.252:6443: connect: connection refused Dec 12 17:21:25.369676 kubelet[2510]: E1212 17:21:25.369658 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.6.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-8-acd31a5336&limit=500&resourceVersion=0\": dial tcp 10.0.6.252:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:21:25.370896 kubelet[2510]: I1212 17:21:25.370200 2510 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:21:25.371418 kubelet[2510]: I1212 17:21:25.371207 2510 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:21:25.374614 kubelet[2510]: I1212 17:21:25.373820 2510 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:21:25.375290 kubelet[2510]: I1212 17:21:25.375268 2510 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:21:25.376041 kubelet[2510]: I1212 17:21:25.375769 2510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:21:25.376198 kubelet[2510]: I1212 17:21:25.376166 2510 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:21:25.378000 audit[2524]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:25.378000 audit[2524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd1496c70 a2=0 a3=0 items=0 ppid=2510 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:21:25.380466 kubelet[2510]: E1212 17:21:25.380164 2510 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515-1-0-8-acd31a5336\" not found" Dec 12 17:21:25.380466 kubelet[2510]: I1212 17:21:25.380210 2510 
volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:21:25.380466 kubelet[2510]: E1212 17:21:25.379907 2510 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.6.252:6443/api/v1/namespaces/default/events\": dial tcp 10.0.6.252:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515-1-0-8-acd31a5336.1880878b16028cbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515-1-0-8-acd31a5336,UID:ci-4515-1-0-8-acd31a5336,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-8-acd31a5336,},FirstTimestamp:2025-12-12 17:21:25.368704189 +0000 UTC m=+0.924994125,LastTimestamp:2025-12-12 17:21:25.368704189 +0000 UTC m=+0.924994125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-8-acd31a5336,}" Dec 12 17:21:25.380466 kubelet[2510]: I1212 17:21:25.380393 2510 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:21:25.380708 kubelet[2510]: I1212 17:21:25.380692 2510 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:21:25.381186 kubelet[2510]: E1212 17:21:25.381152 2510 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-8-acd31a5336?timeout=10s\": dial tcp 10.0.6.252:6443: connect: connection refused" interval="200ms" Dec 12 17:21:25.379000 audit[2525]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:25.379000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8c6afa0 a2=0 a3=0 items=0 ppid=2510 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:21:25.381541 kubelet[2510]: W1212 17:21:25.381198 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.6.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.6.252:6443: connect: connection refused Dec 12 17:21:25.381541 kubelet[2510]: E1212 17:21:25.381271 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.6.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.6.252:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:21:25.382111 kubelet[2510]: I1212 17:21:25.382007 2510 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:21:25.382317 kubelet[2510]: I1212 17:21:25.382274 2510 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:21:25.382794 kubelet[2510]: E1212 17:21:25.382746 2510 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:21:25.383483 kubelet[2510]: I1212 17:21:25.383460 2510 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:21:25.382000 audit[2527]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:25.382000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcf9c4010 a2=0 a3=0 items=0 ppid=2510 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:21:25.384000 audit[2529]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:25.384000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc37567e0 a2=0 a3=0 items=0 ppid=2510 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:21:25.393000 audit[2535]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:25.393000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd0f97560 a2=0 a3=0 items=0 ppid=2510 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 17:21:25.395458 kubelet[2510]: I1212 17:21:25.395423 2510 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 12 17:21:25.395781 kubelet[2510]: I1212 17:21:25.395692 2510 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:21:25.395781 kubelet[2510]: I1212 17:21:25.395712 2510 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:21:25.395781 kubelet[2510]: I1212 17:21:25.395730 2510 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:21:25.394000 audit[2536]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:25.394000 audit[2536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd4780290 a2=0 a3=0 items=0 ppid=2510 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:21:25.396951 kubelet[2510]: I1212 17:21:25.396922 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 17:21:25.396990 kubelet[2510]: I1212 17:21:25.396965 2510 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:21:25.397016 kubelet[2510]: I1212 17:21:25.396997 2510 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:21:25.397016 kubelet[2510]: I1212 17:21:25.397003 2510 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:21:25.397216 kubelet[2510]: E1212 17:21:25.397193 2510 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:21:25.395000 audit[2537]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:25.395000 audit[2537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcbfc8720 a2=0 a3=0 items=0 ppid=2510 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:21:25.396000 audit[2538]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:25.396000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff1eab5b0 a2=0 a3=0 items=0 ppid=2510 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:21:25.397000 audit[2539]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:25.397000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4927f80 a2=0 a3=0 items=0 ppid=2510 pid=2539 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:21:25.397000 audit[2540]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:25.397000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea99d500 a2=0 a3=0 items=0 ppid=2510 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.397000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:21:25.399306 kubelet[2510]: I1212 17:21:25.399249 2510 policy_none.go:49] "None policy: Start" Dec 12 17:21:25.399306 kubelet[2510]: I1212 17:21:25.399271 2510 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:21:25.399306 kubelet[2510]: I1212 17:21:25.399282 2510 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:21:25.399914 kubelet[2510]: W1212 17:21:25.399864 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.6.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.6.252:6443: connect: connection refused Dec 12 17:21:25.399977 kubelet[2510]: E1212 17:21:25.399926 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.6.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.6.252:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:21:25.398000 audit[2541]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:25.398000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcbf5e380 a2=0 a3=0 items=0 ppid=2510 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:21:25.400000 audit[2542]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:25.400000 audit[2542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1e599d0 a2=0 a3=0 items=0 ppid=2510 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:25.400000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:21:25.404717 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 12 17:21:25.419377 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:21:25.422778 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:21:25.441848 kubelet[2510]: I1212 17:21:25.441566 2510 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:21:25.441848 kubelet[2510]: I1212 17:21:25.441778 2510 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:21:25.441848 kubelet[2510]: I1212 17:21:25.441790 2510 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:21:25.442386 kubelet[2510]: I1212 17:21:25.442350 2510 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:21:25.443373 kubelet[2510]: E1212 17:21:25.443349 2510 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:21:25.443457 kubelet[2510]: E1212 17:21:25.443412 2510 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515-1-0-8-acd31a5336\" not found" Dec 12 17:21:25.507505 systemd[1]: Created slice kubepods-burstable-poda3b2bee5dd740c3a72ea0bffa8a511a7.slice - libcontainer container kubepods-burstable-poda3b2bee5dd740c3a72ea0bffa8a511a7.slice. Dec 12 17:21:25.523617 kubelet[2510]: E1212 17:21:25.523578 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.526327 systemd[1]: Created slice kubepods-burstable-podcbe33c8ea6d13a382319a5d470e7c7e8.slice - libcontainer container kubepods-burstable-podcbe33c8ea6d13a382319a5d470e7c7e8.slice. Dec 12 17:21:25.528132 kubelet[2510]: E1212 17:21:25.528105 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.531245 systemd[1]: Created slice kubepods-burstable-pod5358502a0ceaf93a281a13fa9644339e.slice - libcontainer container kubepods-burstable-pod5358502a0ceaf93a281a13fa9644339e.slice. 
Dec 12 17:21:25.532842 kubelet[2510]: E1212 17:21:25.532817 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.544490 kubelet[2510]: I1212 17:21:25.544463 2510 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.545036 kubelet[2510]: E1212 17:21:25.545005 2510 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.6.252:6443/api/v1/nodes\": dial tcp 10.0.6.252:6443: connect: connection refused" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.581887 kubelet[2510]: I1212 17:21:25.581667 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3b2bee5dd740c3a72ea0bffa8a511a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-8-acd31a5336\" (UID: \"a3b2bee5dd740c3a72ea0bffa8a511a7\") " pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.581887 kubelet[2510]: I1212 17:21:25.581764 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.581887 kubelet[2510]: I1212 17:21:25.581785 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.581887 kubelet[2510]: I1212 17:21:25.581804 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.581887 kubelet[2510]: I1212 17:21:25.581822 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5358502a0ceaf93a281a13fa9644339e-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-8-acd31a5336\" (UID: \"5358502a0ceaf93a281a13fa9644339e\") " pod="kube-system/kube-scheduler-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.582109 kubelet[2510]: I1212 17:21:25.581839 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3b2bee5dd740c3a72ea0bffa8a511a7-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-8-acd31a5336\" (UID: \"a3b2bee5dd740c3a72ea0bffa8a511a7\") " pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.582109 kubelet[2510]: I1212 17:21:25.581853 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3b2bee5dd740c3a72ea0bffa8a511a7-k8s-certs\") pod 
\"kube-apiserver-ci-4515-1-0-8-acd31a5336\" (UID: \"a3b2bee5dd740c3a72ea0bffa8a511a7\") " pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.582109 kubelet[2510]: I1212 17:21:25.581867 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.582109 kubelet[2510]: I1212 17:21:25.581882 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.582109 kubelet[2510]: E1212 17:21:25.581896 2510 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-8-acd31a5336?timeout=10s\": dial tcp 10.0.6.252:6443: connect: connection refused" interval="400ms" Dec 12 17:21:25.747196 kubelet[2510]: I1212 17:21:25.747158 2510 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.747571 kubelet[2510]: E1212 17:21:25.747525 2510 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.6.252:6443/api/v1/nodes\": dial tcp 10.0.6.252:6443: connect: connection refused" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:25.825361 containerd[1697]: time="2025-12-12T17:21:25.825246815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-8-acd31a5336,Uid:a3b2bee5dd740c3a72ea0bffa8a511a7,Namespace:kube-system,Attempt:0,}" Dec 12 17:21:25.829933 containerd[1697]: time="2025-12-12T17:21:25.829881827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-8-acd31a5336,Uid:cbe33c8ea6d13a382319a5d470e7c7e8,Namespace:kube-system,Attempt:0,}" Dec 12 17:21:25.833514 containerd[1697]: time="2025-12-12T17:21:25.833472556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-8-acd31a5336,Uid:5358502a0ceaf93a281a13fa9644339e,Namespace:kube-system,Attempt:0,}" Dec 12 17:21:25.984052 kubelet[2510]: E1212 17:21:25.984007 2510 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-8-acd31a5336?timeout=10s\": dial tcp 10.0.6.252:6443: connect: connection refused" interval="800ms" Dec 12 17:21:26.087123 containerd[1697]: time="2025-12-12T17:21:26.086967975Z" level=info msg="connecting to shim f6d0dd80b2ff77660df9f47489cfcaf14a94b9900337876e98f9f2a494382f05" address="unix:///run/containerd/s/b6f4576bac7c3c7178e273037d4ae9eba096bd6e22abc0e09aecc32cbabbcb7a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:21:26.094912 containerd[1697]: time="2025-12-12T17:21:26.093224871Z" level=info msg="connecting to shim d056ca186c032c622e77fc389ff550050dd7f53ad26d1a6f0a7c621267cddbd5" address="unix:///run/containerd/s/1de0eec9f9e7735bf4dd81ffa59875b334496fdefa5a36830d36f6d64ec09637" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:21:26.094912 
containerd[1697]: time="2025-12-12T17:21:26.093931913Z" level=info msg="connecting to shim 0362bab8d83d63bcb417a29f9360ad26cece67ed00e361a23fd31a10add80c96" address="unix:///run/containerd/s/6a339dcefc961c0c352be0e9d20f78982844ad055521d915f3959269dd56b5e3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:21:26.115614 systemd[1]: Started cri-containerd-f6d0dd80b2ff77660df9f47489cfcaf14a94b9900337876e98f9f2a494382f05.scope - libcontainer container f6d0dd80b2ff77660df9f47489cfcaf14a94b9900337876e98f9f2a494382f05. Dec 12 17:21:26.119706 systemd[1]: Started cri-containerd-0362bab8d83d63bcb417a29f9360ad26cece67ed00e361a23fd31a10add80c96.scope - libcontainer container 0362bab8d83d63bcb417a29f9360ad26cece67ed00e361a23fd31a10add80c96. Dec 12 17:21:26.121502 systemd[1]: Started cri-containerd-d056ca186c032c622e77fc389ff550050dd7f53ad26d1a6f0a7c621267cddbd5.scope - libcontainer container d056ca186c032c622e77fc389ff550050dd7f53ad26d1a6f0a7c621267cddbd5. Dec 12 17:21:26.128000 audit: BPF prog-id=83 op=LOAD Dec 12 17:21:26.129000 audit: BPF prog-id=84 op=LOAD Dec 12 17:21:26.129000 audit[2585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=2551 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636643064643830623266663737363630646639663437343839636663 Dec 12 17:21:26.129000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:21:26.129000 audit[2585]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636643064643830623266663737363630646639663437343839636663 Dec 12 17:21:26.129000 audit: BPF prog-id=85 op=LOAD Dec 12 17:21:26.129000 audit[2585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=2551 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636643064643830623266663737363630646639663437343839636663 Dec 12 17:21:26.129000 audit: BPF prog-id=86 op=LOAD Dec 12 17:21:26.129000 audit[2585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=2551 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.129000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636643064643830623266663737363630646639663437343839636663 Dec 12 17:21:26.129000 audit: BPF prog-id=86 op=UNLOAD Dec 12 17:21:26.129000 audit[2585]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636643064643830623266663737363630646639663437343839636663 Dec 12 17:21:26.129000 audit: BPF prog-id=85 op=UNLOAD Dec 12 17:21:26.129000 audit[2585]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636643064643830623266663737363630646639663437343839636663 Dec 12 17:21:26.129000 audit: BPF prog-id=87 op=LOAD Dec 12 17:21:26.129000 audit[2585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=2551 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636643064643830623266663737363630646639663437343839636663 Dec 12 17:21:26.132000 audit: BPF prog-id=88 op=LOAD Dec 12 17:21:26.133000 audit: BPF prog-id=89 op=LOAD Dec 12 17:21:26.133000 audit: BPF prog-id=90 op=LOAD Dec 12 17:21:26.133000 audit[2605]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2579 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363262616238643833643633626362343137613239663933363061 Dec 12 17:21:26.134000 audit: BPF prog-id=90 op=UNLOAD Dec 12 17:21:26.134000 audit[2605]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.134000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363262616238643833643633626362343137613239663933363061 Dec 12 17:21:26.134000 audit: BPF prog-id=91 op=LOAD Dec 12 17:21:26.134000 audit[2605]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2579 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363262616238643833643633626362343137613239663933363061 Dec 12 17:21:26.134000 audit: BPF prog-id=92 op=LOAD Dec 12 17:21:26.134000 audit[2605]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2579 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363262616238643833643633626362343137613239663933363061 Dec 12 17:21:26.134000 audit: BPF prog-id=92 op=UNLOAD Dec 12 17:21:26.134000 audit[2605]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363262616238643833643633626362343137613239663933363061 Dec 12 17:21:26.134000 audit: BPF prog-id=91 op=UNLOAD Dec 12 17:21:26.134000 audit[2605]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363262616238643833643633626362343137613239663933363061 Dec 12 17:21:26.134000 audit: BPF prog-id=93 op=LOAD Dec 12 17:21:26.134000 audit[2605]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2579 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.134000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033363262616238643833643633626362343137613239663933363061 Dec 12 17:21:26.135000 audit: BPF prog-id=94 op=LOAD Dec 12 17:21:26.135000 audit[2614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2576 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430353663613138366330333263363232653737666333383966663535 Dec 12 17:21:26.135000 audit: BPF prog-id=94 op=UNLOAD Dec 12 17:21:26.135000 audit[2614]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430353663613138366330333263363232653737666333383966663535 Dec 12 17:21:26.135000 audit: BPF prog-id=95 op=LOAD Dec 12 17:21:26.135000 audit[2614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2576 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430353663613138366330333263363232653737666333383966663535 Dec 12 17:21:26.135000 audit: BPF prog-id=96 op=LOAD Dec 12 17:21:26.135000 audit[2614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2576 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430353663613138366330333263363232653737666333383966663535 Dec 12 17:21:26.135000 audit: BPF prog-id=96 op=UNLOAD Dec 12 17:21:26.135000 audit[2614]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.135000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430353663613138366330333263363232653737666333383966663535 Dec 12 17:21:26.135000 audit: BPF prog-id=95 op=UNLOAD Dec 12 17:21:26.135000 audit[2614]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430353663613138366330333263363232653737666333383966663535 Dec 12 17:21:26.135000 audit: BPF prog-id=97 op=LOAD Dec 12 17:21:26.135000 audit[2614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2576 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430353663613138366330333263363232653737666333383966663535 Dec 12 17:21:26.150260 kubelet[2510]: I1212 17:21:26.150213 2510 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:26.150703 kubelet[2510]: E1212 17:21:26.150663 2510 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.6.252:6443/api/v1/nodes\": dial tcp 10.0.6.252:6443: connect: connection refused" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:26.168195 containerd[1697]: time="2025-12-12T17:21:26.168131306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-8-acd31a5336,Uid:a3b2bee5dd740c3a72ea0bffa8a511a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d056ca186c032c622e77fc389ff550050dd7f53ad26d1a6f0a7c621267cddbd5\"" Dec 12 17:21:26.171821 containerd[1697]: time="2025-12-12T17:21:26.171698275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-8-acd31a5336,Uid:5358502a0ceaf93a281a13fa9644339e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6d0dd80b2ff77660df9f47489cfcaf14a94b9900337876e98f9f2a494382f05\"" Dec 12 17:21:26.171821 containerd[1697]: time="2025-12-12T17:21:26.171741555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-8-acd31a5336,Uid:cbe33c8ea6d13a382319a5d470e7c7e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0362bab8d83d63bcb417a29f9360ad26cece67ed00e361a23fd31a10add80c96\"" Dec 12 17:21:26.171959 containerd[1697]: time="2025-12-12T17:21:26.171922355Z" level=info msg="CreateContainer within sandbox \"d056ca186c032c622e77fc389ff550050dd7f53ad26d1a6f0a7c621267cddbd5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:21:26.173566 containerd[1697]: time="2025-12-12T17:21:26.173531480Z" level=info msg="CreateContainer within sandbox \"0362bab8d83d63bcb417a29f9360ad26cece67ed00e361a23fd31a10add80c96\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:21:26.174463 containerd[1697]: time="2025-12-12T17:21:26.173962921Z" level=info msg="CreateContainer within sandbox \"f6d0dd80b2ff77660df9f47489cfcaf14a94b9900337876e98f9f2a494382f05\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:21:26.182174 containerd[1697]: time="2025-12-12T17:21:26.182142102Z" level=info msg="Container c2bd3e2f13451552e5380328a75a7854d88d7566e8333a39fd1d6302fee8d584: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:21:26.193714 containerd[1697]: time="2025-12-12T17:21:26.193659972Z" level=info msg="CreateContainer within sandbox \"d056ca186c032c622e77fc389ff550050dd7f53ad26d1a6f0a7c621267cddbd5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c2bd3e2f13451552e5380328a75a7854d88d7566e8333a39fd1d6302fee8d584\"" Dec 12 17:21:26.194426 containerd[1697]: time="2025-12-12T17:21:26.194387614Z" level=info msg="StartContainer for \"c2bd3e2f13451552e5380328a75a7854d88d7566e8333a39fd1d6302fee8d584\"" Dec 12 17:21:26.194655 containerd[1697]: time="2025-12-12T17:21:26.194611454Z" level=info msg="Container 6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:21:26.195583 containerd[1697]: time="2025-12-12T17:21:26.195555617Z" level=info msg="connecting to shim c2bd3e2f13451552e5380328a75a7854d88d7566e8333a39fd1d6302fee8d584" address="unix:///run/containerd/s/1de0eec9f9e7735bf4dd81ffa59875b334496fdefa5a36830d36f6d64ec09637" protocol=ttrpc version=3 Dec 12 17:21:26.198101 containerd[1697]: time="2025-12-12T17:21:26.198063663Z" level=info msg="Container 9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:21:26.204424 containerd[1697]: time="2025-12-12T17:21:26.204354200Z" level=info msg="CreateContainer within sandbox \"0362bab8d83d63bcb417a29f9360ad26cece67ed00e361a23fd31a10add80c96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15\"" Dec 12 17:21:26.205020 containerd[1697]: time="2025-12-12T17:21:26.204983241Z" level=info msg="StartContainer for \"6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15\"" Dec 12 17:21:26.206194 containerd[1697]: time="2025-12-12T17:21:26.206140404Z" level=info msg="connecting to shim 6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15" address="unix:///run/containerd/s/6a339dcefc961c0c352be0e9d20f78982844ad055521d915f3959269dd56b5e3" protocol=ttrpc version=3 Dec 12 17:21:26.213681 containerd[1697]: time="2025-12-12T17:21:26.213640984Z" level=info msg="CreateContainer within sandbox \"f6d0dd80b2ff77660df9f47489cfcaf14a94b9900337876e98f9f2a494382f05\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9\"" Dec 12 17:21:26.214114 containerd[1697]: time="2025-12-12T17:21:26.214091905Z" level=info msg="StartContainer for \"9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9\"" Dec 12 17:21:26.215186 containerd[1697]: time="2025-12-12T17:21:26.215129268Z" level=info msg="connecting to shim 9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9" address="unix:///run/containerd/s/b6f4576bac7c3c7178e273037d4ae9eba096bd6e22abc0e09aecc32cbabbcb7a" protocol=ttrpc version=3 Dec 12 17:21:26.216712 systemd[1]: Started 
cri-containerd-c2bd3e2f13451552e5380328a75a7854d88d7566e8333a39fd1d6302fee8d584.scope - libcontainer container c2bd3e2f13451552e5380328a75a7854d88d7566e8333a39fd1d6302fee8d584. Dec 12 17:21:26.244728 systemd[1]: Started cri-containerd-6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15.scope - libcontainer container 6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15. Dec 12 17:21:26.245910 systemd[1]: Started cri-containerd-9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9.scope - libcontainer container 9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9. Dec 12 17:21:26.246000 audit: BPF prog-id=98 op=LOAD Dec 12 17:21:26.248000 audit: BPF prog-id=99 op=LOAD Dec 12 17:21:26.248000 audit[2680]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626433653266313334353135353265353338303332386137356137 Dec 12 17:21:26.248000 audit: BPF prog-id=99 op=UNLOAD Dec 12 17:21:26.248000 audit[2680]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626433653266313334353135353265353338303332386137356137 Dec 12 17:21:26.249000 audit: BPF prog-id=100 op=LOAD Dec 12 17:21:26.249000 audit[2680]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626433653266313334353135353265353338303332386137356137 Dec 12 17:21:26.250000 audit: BPF prog-id=101 op=LOAD Dec 12 17:21:26.250000 audit[2680]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626433653266313334353135353265353338303332386137356137 Dec 12 17:21:26.250000 audit: BPF prog-id=101 op=UNLOAD Dec 12 17:21:26.250000 audit[2680]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 
a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626433653266313334353135353265353338303332386137356137 Dec 12 17:21:26.250000 audit: BPF prog-id=100 op=UNLOAD Dec 12 17:21:26.250000 audit[2680]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626433653266313334353135353265353338303332386137356137 Dec 12 17:21:26.250000 audit: BPF prog-id=102 op=LOAD Dec 12 17:21:26.250000 audit[2680]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626433653266313334353135353265353338303332386137356137 Dec 12 17:21:26.256000 audit: BPF prog-id=103 op=LOAD Dec 12 17:21:26.257000 audit: BPF prog-id=104 op=LOAD Dec 12 17:21:26.257000 audit[2697]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2551 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965366238343265303432653263366537393261613536363134316630 Dec 12 17:21:26.257000 audit: BPF prog-id=104 op=UNLOAD Dec 12 17:21:26.257000 audit[2697]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965366238343265303432653263366537393261613536363134316630 Dec 12 17:21:26.257000 audit: BPF prog-id=105 op=LOAD Dec 12 17:21:26.257000 audit[2697]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2551 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965366238343265303432653263366537393261613536363134316630 Dec 12 17:21:26.257000 audit: BPF prog-id=106 op=LOAD Dec 12 17:21:26.257000 audit[2697]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2551 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965366238343265303432653263366537393261613536363134316630 Dec 12 17:21:26.258000 audit: BPF prog-id=106 op=UNLOAD Dec 12 17:21:26.258000 audit[2697]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965366238343265303432653263366537393261613536363134316630 Dec 12 17:21:26.258000 audit: BPF prog-id=105 op=UNLOAD Dec 12 17:21:26.258000 audit[2697]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965366238343265303432653263366537393261613536363134316630 Dec 12 17:21:26.258000 audit: BPF prog-id=107 op=LOAD Dec 12 17:21:26.258000 audit[2697]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2551 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965366238343265303432653263366537393261613536363134316630 Dec 12 17:21:26.262000 audit: BPF prog-id=108 op=LOAD Dec 12 17:21:26.263000 audit: BPF prog-id=109 op=LOAD Dec 12 17:21:26.263000 audit[2691]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2579 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661643937336638323535323632306430636337633934393766373430 Dec 12 17:21:26.263000 audit: BPF prog-id=109 op=UNLOAD Dec 12 17:21:26.263000 audit[2691]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661643937336638323535323632306430636337633934393766373430 Dec 12 17:21:26.263000 audit: BPF prog-id=110 op=LOAD Dec 12 17:21:26.263000 audit[2691]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2579 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661643937336638323535323632306430636337633934393766373430 Dec 12 17:21:26.263000 audit: BPF prog-id=111 op=LOAD Dec 12 17:21:26.263000 audit[2691]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2579 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661643937336638323535323632306430636337633934393766373430 Dec 12 17:21:26.264000 audit: BPF prog-id=111 op=UNLOAD Dec 12 17:21:26.264000 audit[2691]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661643937336638323535323632306430636337633934393766373430 Dec 12 17:21:26.264000 audit: BPF prog-id=110 op=UNLOAD Dec 12 17:21:26.264000 audit[2691]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.264000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661643937336638323535323632306430636337633934393766373430 Dec 12 17:21:26.264000 audit: BPF prog-id=112 op=LOAD Dec 12 17:21:26.264000 audit[2691]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2579 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:26.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661643937336638323535323632306430636337633934393766373430 Dec 12 17:21:26.289165 containerd[1697]: time="2025-12-12T17:21:26.289023820Z" level=info msg="StartContainer for \"c2bd3e2f13451552e5380328a75a7854d88d7566e8333a39fd1d6302fee8d584\" returns successfully" Dec 12 17:21:26.295628 containerd[1697]: time="2025-12-12T17:21:26.294282153Z" level=info msg="StartContainer for \"9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9\" returns successfully" Dec 12 17:21:26.303500 containerd[1697]: time="2025-12-12T17:21:26.303318017Z" level=info msg="StartContainer for \"6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15\" returns successfully" Dec 12 17:21:26.406965 kubelet[2510]: E1212 17:21:26.406925 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:26.411671 kubelet[2510]: E1212 17:21:26.411645 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:26.414443 kubelet[2510]: E1212 17:21:26.414423 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:26.953675 kubelet[2510]: I1212 17:21:26.953644 2510 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:27.416051 kubelet[2510]: E1212 17:21:27.416019 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:27.416166 kubelet[2510]: E1212 17:21:27.416107 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:28.341506 update_engine[1668]: I20251212 17:21:28.341435 1668 update_attempter.cc:509] Updating boot flags... 
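Editor's note: the long PROCTITLE values in the audit records above are the hex-encoded, NUL-separated argv of the runc processes spawned by the containerd shims; the field appears truncated because auditd caps its length, which is why the container IDs are cut short. The Python sketch below decodes the value recorded for the f6d0dd80... task, with the hex copied verbatim from this log (split only for readability).

# Decode one audit PROCTITLE field from the records above into argv.
proctitle = (
    "72756E6300"                                                # "runc\0"
    "2D2D726F6F7400"                                            # "--root\0"
    "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F00"  # "/run/containerd/runc/k8s.io\0"
    "2D2D6C6F6700"                                              # "--log\0"
    "2F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E"
    "72756E74696D652E76322E7461736B2F6B38732E696F2F"
    "6636643064643830623266663737363630646639663437343839636663"  # log path, truncated by auditd
)
# Hex -> bytes, then split on the NUL separators to recover the argv list.
argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
#  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6d0dd80b2ff77660df9f47489cfc']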
Dec 12 17:21:28.430887 kubelet[2510]: E1212 17:21:28.424705 2510 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:28.430887 kubelet[2510]: E1212 17:21:28.429152 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-8-acd31a5336\" not found" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:28.517953 kubelet[2510]: I1212 17:21:28.517915 2510 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:28.581158 kubelet[2510]: I1212 17:21:28.581117 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:28.588536 kubelet[2510]: E1212 17:21:28.588490 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-8-acd31a5336\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:28.588536 kubelet[2510]: I1212 17:21:28.588528 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:28.591335 kubelet[2510]: E1212 17:21:28.591296 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-8-acd31a5336\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:28.591654 kubelet[2510]: I1212 17:21:28.591497 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:28.593937 kubelet[2510]: E1212 17:21:28.593895 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:29.369313 kubelet[2510]: I1212 17:21:29.369221 2510 apiserver.go:52] "Watching apiserver" Dec 12 17:21:29.381334 kubelet[2510]: I1212 17:21:29.381289 2510 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:21:30.796853 systemd[1]: Reload requested from client PID 2798 ('systemctl') (unit session-7.scope)... Dec 12 17:21:30.797153 systemd[1]: Reloading... Dec 12 17:21:30.880565 zram_generator::config[2844]: No configuration found. Dec 12 17:21:31.063687 systemd[1]: Reloading finished in 266 ms. Dec 12 17:21:31.100762 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:21:31.116719 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:21:31.117013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:21:31.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:31.118417 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 17:21:31.118467 kernel: audit: type=1131 audit(1765560091.115:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:31.118538 systemd[1]: kubelet.service: Consumed 1.262s CPU time, 128.6M memory peak. Dec 12 17:21:31.120452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:21:31.119000 audit: BPF prog-id=113 op=LOAD Dec 12 17:21:31.119000 audit: BPF prog-id=75 op=UNLOAD Dec 12 17:21:31.122659 kernel: audit: type=1334 audit(1765560091.119:397): prog-id=113 op=LOAD Dec 12 17:21:31.122702 kernel: audit: type=1334 audit(1765560091.119:398): prog-id=75 op=UNLOAD Dec 12 17:21:31.122722 kernel: audit: type=1334 audit(1765560091.121:399): prog-id=114 op=LOAD Dec 12 17:21:31.121000 audit: BPF prog-id=114 op=LOAD Dec 12 17:21:31.124425 kernel: audit: type=1334 audit(1765560091.122:400): prog-id=115 op=LOAD Dec 12 17:21:31.124497 kernel: audit: type=1334 audit(1765560091.122:401): prog-id=76 op=UNLOAD Dec 12 17:21:31.122000 audit: BPF prog-id=115 op=LOAD Dec 12 17:21:31.122000 audit: BPF prog-id=76 op=UNLOAD Dec 12 17:21:31.126481 kernel: audit: type=1334 audit(1765560091.122:402): prog-id=77 op=UNLOAD Dec 12 17:21:31.126531 kernel: audit: type=1334 audit(1765560091.122:403): prog-id=116 op=LOAD Dec 12 17:21:31.122000 audit: BPF prog-id=77 op=UNLOAD Dec 12 17:21:31.122000 audit: BPF prog-id=116 op=LOAD Dec 12 17:21:31.127066 kernel: audit: type=1334 audit(1765560091.122:404): prog-id=82 op=UNLOAD Dec 12 17:21:31.122000 audit: BPF prog-id=82 op=UNLOAD Dec 12 17:21:31.123000 audit: BPF prog-id=117 op=LOAD Dec 12 17:21:31.128626 kernel: audit: type=1334 audit(1765560091.123:405): prog-id=117 op=LOAD Dec 12 17:21:31.123000 audit: BPF prog-id=69 op=UNLOAD Dec 12 17:21:31.125000 audit: BPF prog-id=118 op=LOAD Dec 12 17:21:31.125000 audit: BPF prog-id=119 op=LOAD Dec 12 17:21:31.125000 audit: BPF prog-id=70 op=UNLOAD Dec 12 17:21:31.125000 audit: BPF prog-id=71 op=UNLOAD Dec 12 17:21:31.125000 audit: BPF prog-id=120 op=LOAD Dec 12 17:21:31.139000 audit: BPF prog-id=72 op=UNLOAD Dec 12 17:21:31.139000 audit: BPF prog-id=121 op=LOAD Dec 12 17:21:31.139000 audit: BPF prog-id=122 op=LOAD Dec 12 17:21:31.139000 audit: BPF prog-id=73 op=UNLOAD Dec 12 17:21:31.139000 audit: BPF prog-id=74 op=UNLOAD Dec 12 17:21:31.140000 audit: BPF prog-id=123 op=LOAD Dec 12 17:21:31.140000 audit: BPF prog-id=68 op=UNLOAD Dec 12 17:21:31.141000 audit: BPF prog-id=124 op=LOAD Dec 12 17:21:31.141000 audit: BPF prog-id=79 op=UNLOAD Dec 12 17:21:31.141000 audit: BPF prog-id=125 op=LOAD Dec 12 17:21:31.141000 audit: BPF prog-id=126 op=LOAD Dec 12 17:21:31.141000 audit: BPF prog-id=80 op=UNLOAD Dec 12 17:21:31.141000 audit: BPF prog-id=81 op=UNLOAD Dec 12 17:21:31.142000 audit: BPF prog-id=127 op=LOAD Dec 12 17:21:31.142000 audit: BPF prog-id=65 op=UNLOAD Dec 12 17:21:31.142000 audit: BPF prog-id=128 op=LOAD Dec 12 17:21:31.142000 audit: BPF prog-id=129 op=LOAD Dec 12 17:21:31.142000 audit: BPF prog-id=66 op=UNLOAD Dec 12 17:21:31.142000 audit: BPF prog-id=67 op=UNLOAD Dec 12 17:21:31.143000 audit: BPF prog-id=130 op=LOAD Dec 12 17:21:31.143000 audit: BPF prog-id=78 op=UNLOAD Dec 12 17:21:31.144000 audit: BPF prog-id=131 op=LOAD Dec 12 17:21:31.144000 audit: BPF prog-id=132 op=LOAD Dec 12 17:21:31.144000 audit: BPF prog-id=63 op=UNLOAD Dec 12 17:21:31.144000 audit: BPF prog-id=64 op=UNLOAD Dec 12 17:21:31.266297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:21:31.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 17:21:31.272693 (kubelet)[2889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:21:31.313184 kubelet[2889]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:21:31.313184 kubelet[2889]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:21:31.313184 kubelet[2889]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:21:31.313184 kubelet[2889]: I1212 17:21:31.312499 2889 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:21:31.466594 kubelet[2889]: I1212 17:21:31.319990 2889 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:21:31.466594 kubelet[2889]: I1212 17:21:31.320016 2889 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:21:31.466594 kubelet[2889]: I1212 17:21:31.320614 2889 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:21:31.467599 kubelet[2889]: I1212 17:21:31.467549 2889 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 17:21:31.469987 kubelet[2889]: I1212 17:21:31.469902 2889 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:21:31.473763 kubelet[2889]: I1212 17:21:31.473705 2889 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:21:31.476425 kubelet[2889]: I1212 17:21:31.476168 2889 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:21:31.476602 kubelet[2889]: I1212 17:21:31.476546 2889 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:21:31.476772 kubelet[2889]: I1212 17:21:31.476611 2889 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-8-acd31a5336","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:21:31.476841 kubelet[2889]: I1212 17:21:31.476781 2889 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:21:31.476841 kubelet[2889]: I1212 17:21:31.476790 2889 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:21:31.476841 kubelet[2889]: I1212 17:21:31.476836 2889 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:21:31.477571 kubelet[2889]: I1212 17:21:31.476974 2889 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:21:31.477571 kubelet[2889]: I1212 17:21:31.476988 2889 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:21:31.477571 kubelet[2889]: I1212 17:21:31.477014 2889 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:21:31.477571 kubelet[2889]: I1212 17:21:31.477024 2889 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:21:31.477754 kubelet[2889]: I1212 17:21:31.477606 2889 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:21:31.478223 kubelet[2889]: I1212 17:21:31.478196 2889 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:21:31.478723 kubelet[2889]: I1212 17:21:31.478642 2889 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:21:31.478723 kubelet[2889]: I1212 17:21:31.478679 2889 server.go:1287] "Started kubelet" Dec 12 17:21:31.480378 kubelet[2889]: I1212 17:21:31.479713 2889 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:21:31.480378 kubelet[2889]: I1212 
17:21:31.480069 2889 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:21:31.482604 kubelet[2889]: I1212 17:21:31.482578 2889 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:21:31.488412 kubelet[2889]: I1212 17:21:31.488224 2889 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:21:31.489023 kubelet[2889]: I1212 17:21:31.488914 2889 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:21:31.492405 kubelet[2889]: I1212 17:21:31.492118 2889 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:21:31.493547 kubelet[2889]: I1212 17:21:31.493514 2889 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:21:31.493740 kubelet[2889]: E1212 17:21:31.493719 2889 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515-1-0-8-acd31a5336\" not found" Dec 12 17:21:31.494470 kubelet[2889]: I1212 17:21:31.494447 2889 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:21:31.494798 kubelet[2889]: I1212 17:21:31.494776 2889 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:21:31.498721 kubelet[2889]: I1212 17:21:31.496608 2889 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:21:31.498721 kubelet[2889]: I1212 17:21:31.496735 2889 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:21:31.503456 kubelet[2889]: I1212 17:21:31.503241 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:21:31.503956 kubelet[2889]: I1212 17:21:31.503906 2889 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:21:31.505327 kubelet[2889]: I1212 17:21:31.505007 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 17:21:31.505327 kubelet[2889]: I1212 17:21:31.505035 2889 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:21:31.505327 kubelet[2889]: I1212 17:21:31.505112 2889 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:21:31.505327 kubelet[2889]: I1212 17:21:31.505122 2889 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:21:31.505327 kubelet[2889]: E1212 17:21:31.505187 2889 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:21:31.511460 kubelet[2889]: E1212 17:21:31.509700 2889 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:21:31.541168 kubelet[2889]: I1212 17:21:31.541141 2889 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:21:31.541168 kubelet[2889]: I1212 17:21:31.541158 2889 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:21:31.541168 kubelet[2889]: I1212 17:21:31.541180 2889 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:21:31.541346 kubelet[2889]: I1212 17:21:31.541322 2889 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:21:31.541346 kubelet[2889]: I1212 17:21:31.541332 2889 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:21:31.541386 kubelet[2889]: I1212 17:21:31.541350 2889 policy_none.go:49] "None policy: Start" Dec 12 17:21:31.541386 kubelet[2889]: I1212 17:21:31.541358 2889 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:21:31.541386 kubelet[2889]: I1212 17:21:31.541366 2889 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:21:31.541496 kubelet[2889]: I1212 17:21:31.541481 2889 state_mem.go:75] "Updated machine memory state" Dec 12 17:21:31.544788 kubelet[2889]: I1212 17:21:31.544739 2889 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:21:31.544943 kubelet[2889]: I1212 17:21:31.544896 2889 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:21:31.544979 kubelet[2889]: I1212 17:21:31.544941 2889 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:21:31.545575 kubelet[2889]: I1212 17:21:31.545308 2889 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:21:31.546504 kubelet[2889]: E1212 17:21:31.546443 2889 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:21:31.607019 kubelet[2889]: I1212 17:21:31.606965 2889 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.607019 kubelet[2889]: I1212 17:21:31.606991 2889 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.607207 kubelet[2889]: I1212 17:21:31.607184 2889 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.647744 kubelet[2889]: I1212 17:21:31.647714 2889 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.657360 kubelet[2889]: I1212 17:21:31.657328 2889 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.657499 kubelet[2889]: I1212 17:21:31.657432 2889 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.696050 kubelet[2889]: I1212 17:21:31.695934 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.696050 kubelet[2889]: I1212 17:21:31.695981 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3b2bee5dd740c3a72ea0bffa8a511a7-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-8-acd31a5336\" (UID: \"a3b2bee5dd740c3a72ea0bffa8a511a7\") " pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.696050 kubelet[2889]: I1212 17:21:31.696003 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3b2bee5dd740c3a72ea0bffa8a511a7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-8-acd31a5336\" (UID: \"a3b2bee5dd740c3a72ea0bffa8a511a7\") " pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.696221 kubelet[2889]: I1212 17:21:31.696057 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.696221 kubelet[2889]: I1212 17:21:31.696093 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.696221 kubelet[2889]: I1212 17:21:31.696124 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3b2bee5dd740c3a72ea0bffa8a511a7-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-8-acd31a5336\" (UID: 
\"a3b2bee5dd740c3a72ea0bffa8a511a7\") " pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.696221 kubelet[2889]: I1212 17:21:31.696146 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.696221 kubelet[2889]: I1212 17:21:31.696165 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbe33c8ea6d13a382319a5d470e7c7e8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-8-acd31a5336\" (UID: \"cbe33c8ea6d13a382319a5d470e7c7e8\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:31.696323 kubelet[2889]: I1212 17:21:31.696192 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5358502a0ceaf93a281a13fa9644339e-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-8-acd31a5336\" (UID: \"5358502a0ceaf93a281a13fa9644339e\") " pod="kube-system/kube-scheduler-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:32.478138 kubelet[2889]: I1212 17:21:32.478100 2889 apiserver.go:52] "Watching apiserver" Dec 12 17:21:32.494172 kubelet[2889]: I1212 17:21:32.494124 2889 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:21:32.526855 kubelet[2889]: I1212 17:21:32.526782 2889 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:32.526998 kubelet[2889]: I1212 17:21:32.526887 2889 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:32.536409 kubelet[2889]: E1212 17:21:32.536362 2889 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-8-acd31a5336\" already exists" pod="kube-system/kube-scheduler-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:32.538324 kubelet[2889]: E1212 17:21:32.538297 2889 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-8-acd31a5336\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" Dec 12 17:21:32.546142 kubelet[2889]: I1212 17:21:32.545274 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515-1-0-8-acd31a5336" podStartSLOduration=1.545241237 podStartE2EDuration="1.545241237s" podCreationTimestamp="2025-12-12 17:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:21:32.545237397 +0000 UTC m=+1.268285258" watchObservedRunningTime="2025-12-12 17:21:32.545241237 +0000 UTC m=+1.268289058" Dec 12 17:21:32.562017 kubelet[2889]: I1212 17:21:32.561963 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515-1-0-8-acd31a5336" podStartSLOduration=1.56194452 podStartE2EDuration="1.56194452s" podCreationTimestamp="2025-12-12 17:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:21:32.56192508 +0000 
UTC m=+1.284972941" watchObservedRunningTime="2025-12-12 17:21:32.56194452 +0000 UTC m=+1.284992381" Dec 12 17:21:32.562282 kubelet[2889]: I1212 17:21:32.562042 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515-1-0-8-acd31a5336" podStartSLOduration=1.56203804 podStartE2EDuration="1.56203804s" podCreationTimestamp="2025-12-12 17:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:21:32.553741819 +0000 UTC m=+1.276789640" watchObservedRunningTime="2025-12-12 17:21:32.56203804 +0000 UTC m=+1.285085861" Dec 12 17:21:36.399869 kubelet[2889]: I1212 17:21:36.399825 2889 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:21:36.400216 containerd[1697]: time="2025-12-12T17:21:36.400125417Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:21:36.400382 kubelet[2889]: I1212 17:21:36.400293 2889 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:21:37.255682 systemd[1]: Created slice kubepods-besteffort-podf70bbd23_9ff9_4206_ab6c_833fa5be17a6.slice - libcontainer container kubepods-besteffort-podf70bbd23_9ff9_4206_ab6c_833fa5be17a6.slice. Dec 12 17:21:37.333428 kubelet[2889]: I1212 17:21:37.333347 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpnjg\" (UniqueName: \"kubernetes.io/projected/f70bbd23-9ff9-4206-ab6c-833fa5be17a6-kube-api-access-gpnjg\") pod \"kube-proxy-cnkf4\" (UID: \"f70bbd23-9ff9-4206-ab6c-833fa5be17a6\") " pod="kube-system/kube-proxy-cnkf4" Dec 12 17:21:37.333428 kubelet[2889]: I1212 17:21:37.333420 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f70bbd23-9ff9-4206-ab6c-833fa5be17a6-xtables-lock\") pod \"kube-proxy-cnkf4\" (UID: \"f70bbd23-9ff9-4206-ab6c-833fa5be17a6\") " pod="kube-system/kube-proxy-cnkf4" Dec 12 17:21:37.333585 kubelet[2889]: I1212 17:21:37.333448 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f70bbd23-9ff9-4206-ab6c-833fa5be17a6-kube-proxy\") pod \"kube-proxy-cnkf4\" (UID: \"f70bbd23-9ff9-4206-ab6c-833fa5be17a6\") " pod="kube-system/kube-proxy-cnkf4" Dec 12 17:21:37.333585 kubelet[2889]: I1212 17:21:37.333464 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f70bbd23-9ff9-4206-ab6c-833fa5be17a6-lib-modules\") pod \"kube-proxy-cnkf4\" (UID: \"f70bbd23-9ff9-4206-ab6c-833fa5be17a6\") " pod="kube-system/kube-proxy-cnkf4" Dec 12 17:21:37.476492 systemd[1]: Created slice kubepods-besteffort-podeb16128d_beb5_452f_bdbb_8bfa8a64a1e0.slice - libcontainer container kubepods-besteffort-podeb16128d_beb5_452f_bdbb_8bfa8a64a1e0.slice. 
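[Reading note, not part of the journal: the audit records that follow hex-encode each runc/iptables invocation in their proctitle= field, with NUL bytes separating the arguments. A minimal sketch for turning those fields back into readable command lines, assuming Python 3; the file and function names are illustrative, not taken from the log.]

# decode_proctitle.py - decode an audit PROCTITLE hex string into a command line.
import sys

def decode_proctitle(hex_str: str) -> str:
    # The audit subsystem hex-encodes the raw argv buffer; arguments are NUL-separated.
    raw = bytes.fromhex(hex_str)
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    # Sample value copied from one of the NETFILTER_CFG records further down in this log.
    sample = ("6970367461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(decode_proctitle(sys.argv[1] if len(sys.argv) > 1 else sample))
    # Prints: ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

[Decoded this way, the NETFILTER_CFG proctitle values below show kube-proxy creating its KUBE-* chains and rules via iptables/ip6tables, and the runc proctitle values show the containerd shims launching the kube-proxy and tigera-operator containers.]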
Dec 12 17:21:37.534447 kubelet[2889]: I1212 17:21:37.534215 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb16128d-beb5-452f-bdbb-8bfa8a64a1e0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-b56wz\" (UID: \"eb16128d-beb5-452f-bdbb-8bfa8a64a1e0\") " pod="tigera-operator/tigera-operator-7dcd859c48-b56wz" Dec 12 17:21:37.534447 kubelet[2889]: I1212 17:21:37.534322 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs8gl\" (UniqueName: \"kubernetes.io/projected/eb16128d-beb5-452f-bdbb-8bfa8a64a1e0-kube-api-access-gs8gl\") pod \"tigera-operator-7dcd859c48-b56wz\" (UID: \"eb16128d-beb5-452f-bdbb-8bfa8a64a1e0\") " pod="tigera-operator/tigera-operator-7dcd859c48-b56wz" Dec 12 17:21:37.565603 containerd[1697]: time="2025-12-12T17:21:37.565564646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cnkf4,Uid:f70bbd23-9ff9-4206-ab6c-833fa5be17a6,Namespace:kube-system,Attempt:0,}" Dec 12 17:21:37.587031 containerd[1697]: time="2025-12-12T17:21:37.586967302Z" level=info msg="connecting to shim 53947f5d02a08b2820cd4daa4096b6311a0ed69ccc61cb1ac1a625a63bc83f0a" address="unix:///run/containerd/s/0e8078dc2d913384666b00ac96ab14ec4e563c9cf435a409a600722cc7f808cb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:21:37.607895 systemd[1]: Started cri-containerd-53947f5d02a08b2820cd4daa4096b6311a0ed69ccc61cb1ac1a625a63bc83f0a.scope - libcontainer container 53947f5d02a08b2820cd4daa4096b6311a0ed69ccc61cb1ac1a625a63bc83f0a. Dec 12 17:21:37.618112 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:21:37.618239 kernel: audit: type=1334 audit(1765560097.614:438): prog-id=133 op=LOAD Dec 12 17:21:37.614000 audit: BPF prog-id=133 op=LOAD Dec 12 17:21:37.615000 audit: BPF prog-id=134 op=LOAD Dec 12 17:21:37.615000 audit[2959]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.624023 kernel: audit: type=1334 audit(1765560097.615:439): prog-id=134 op=LOAD Dec 12 17:21:37.624088 kernel: audit: type=1300 audit(1765560097.615:439): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.628488 kernel: audit: type=1327 audit(1765560097.615:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.628624 kernel: audit: type=1334 audit(1765560097.615:440): prog-id=134 op=UNLOAD Dec 12 17:21:37.615000 audit: BPF prog-id=134 op=UNLOAD Dec 12 17:21:37.615000 audit[2959]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.633286 kernel: audit: type=1300 audit(1765560097.615:440): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.636720 kernel: audit: type=1327 audit(1765560097.615:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.615000 audit: BPF prog-id=135 op=LOAD Dec 12 17:21:37.638252 kernel: audit: type=1334 audit(1765560097.615:441): prog-id=135 op=LOAD Dec 12 17:21:37.638319 kernel: audit: type=1300 audit(1765560097.615:441): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.615000 audit[2959]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.644674 kernel: audit: type=1327 audit(1765560097.615:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.616000 audit: BPF prog-id=136 op=LOAD Dec 12 17:21:37.616000 audit[2959]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.616000 audit: BPF prog-id=136 op=UNLOAD Dec 12 17:21:37.616000 audit[2959]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 
a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.616000 audit: BPF prog-id=135 op=UNLOAD Dec 12 17:21:37.616000 audit[2959]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.616000 audit: BPF prog-id=137 op=LOAD Dec 12 17:21:37.616000 audit[2959]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2946 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533393437663564303261303862323832306364346461613430393662 Dec 12 17:21:37.657297 containerd[1697]: time="2025-12-12T17:21:37.657256445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cnkf4,Uid:f70bbd23-9ff9-4206-ab6c-833fa5be17a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"53947f5d02a08b2820cd4daa4096b6311a0ed69ccc61cb1ac1a625a63bc83f0a\"" Dec 12 17:21:37.660975 containerd[1697]: time="2025-12-12T17:21:37.660648694Z" level=info msg="CreateContainer within sandbox \"53947f5d02a08b2820cd4daa4096b6311a0ed69ccc61cb1ac1a625a63bc83f0a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:21:37.675193 containerd[1697]: time="2025-12-12T17:21:37.674214849Z" level=info msg="Container 69fb443d049749eb7f26d4969da1788855a35320e72b096b35a7028fbaef9f87: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:21:37.681537 containerd[1697]: time="2025-12-12T17:21:37.681472108Z" level=info msg="CreateContainer within sandbox \"53947f5d02a08b2820cd4daa4096b6311a0ed69ccc61cb1ac1a625a63bc83f0a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"69fb443d049749eb7f26d4969da1788855a35320e72b096b35a7028fbaef9f87\"" Dec 12 17:21:37.682444 containerd[1697]: time="2025-12-12T17:21:37.682031549Z" level=info msg="StartContainer for \"69fb443d049749eb7f26d4969da1788855a35320e72b096b35a7028fbaef9f87\"" Dec 12 17:21:37.683982 containerd[1697]: time="2025-12-12T17:21:37.683950634Z" level=info msg="connecting to shim 69fb443d049749eb7f26d4969da1788855a35320e72b096b35a7028fbaef9f87" address="unix:///run/containerd/s/0e8078dc2d913384666b00ac96ab14ec4e563c9cf435a409a600722cc7f808cb" protocol=ttrpc version=3 Dec 12 17:21:37.702653 systemd[1]: Started 
cri-containerd-69fb443d049749eb7f26d4969da1788855a35320e72b096b35a7028fbaef9f87.scope - libcontainer container 69fb443d049749eb7f26d4969da1788855a35320e72b096b35a7028fbaef9f87. Dec 12 17:21:37.765000 audit: BPF prog-id=138 op=LOAD Dec 12 17:21:37.765000 audit[2984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639666234343364303439373439656237663236643439363964613137 Dec 12 17:21:37.765000 audit: BPF prog-id=139 op=LOAD Dec 12 17:21:37.765000 audit[2984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639666234343364303439373439656237663236643439363964613137 Dec 12 17:21:37.765000 audit: BPF prog-id=139 op=UNLOAD Dec 12 17:21:37.765000 audit[2984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639666234343364303439373439656237663236643439363964613137 Dec 12 17:21:37.765000 audit: BPF prog-id=138 op=UNLOAD Dec 12 17:21:37.765000 audit[2984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639666234343364303439373439656237663236643439363964613137 Dec 12 17:21:37.765000 audit: BPF prog-id=140 op=LOAD Dec 12 17:21:37.765000 audit[2984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.765000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639666234343364303439373439656237663236643439363964613137 Dec 12 17:21:37.782539 containerd[1697]: time="2025-12-12T17:21:37.782469770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-b56wz,Uid:eb16128d-beb5-452f-bdbb-8bfa8a64a1e0,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:21:37.783982 containerd[1697]: time="2025-12-12T17:21:37.783946054Z" level=info msg="StartContainer for \"69fb443d049749eb7f26d4969da1788855a35320e72b096b35a7028fbaef9f87\" returns successfully" Dec 12 17:21:37.800718 containerd[1697]: time="2025-12-12T17:21:37.800553457Z" level=info msg="connecting to shim 28a6429585a7eafad46b7a902e4b929299fe31acf5567778fd7b9e388a90a772" address="unix:///run/containerd/s/183e350dc661b62f3d733e7feeae2cc4e06a4d2fe91f83f9b0a392c54089e2c9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:21:37.825635 systemd[1]: Started cri-containerd-28a6429585a7eafad46b7a902e4b929299fe31acf5567778fd7b9e388a90a772.scope - libcontainer container 28a6429585a7eafad46b7a902e4b929299fe31acf5567778fd7b9e388a90a772. Dec 12 17:21:37.835000 audit: BPF prog-id=141 op=LOAD Dec 12 17:21:37.835000 audit: BPF prog-id=142 op=LOAD Dec 12 17:21:37.835000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3021 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238613634323935383561376561666164343662376139303265346239 Dec 12 17:21:37.835000 audit: BPF prog-id=142 op=UNLOAD Dec 12 17:21:37.835000 audit[3033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238613634323935383561376561666164343662376139303265346239 Dec 12 17:21:37.835000 audit: BPF prog-id=143 op=LOAD Dec 12 17:21:37.835000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3021 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238613634323935383561376561666164343662376139303265346239 Dec 12 17:21:37.835000 audit: BPF prog-id=144 op=LOAD Dec 12 17:21:37.835000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3021 
pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238613634323935383561376561666164343662376139303265346239 Dec 12 17:21:37.835000 audit: BPF prog-id=144 op=UNLOAD Dec 12 17:21:37.835000 audit[3033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238613634323935383561376561666164343662376139303265346239 Dec 12 17:21:37.835000 audit: BPF prog-id=143 op=UNLOAD Dec 12 17:21:37.835000 audit[3033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238613634323935383561376561666164343662376139303265346239 Dec 12 17:21:37.835000 audit: BPF prog-id=145 op=LOAD Dec 12 17:21:37.835000 audit[3033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3021 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238613634323935383561376561666164343662376139303265346239 Dec 12 17:21:37.862128 containerd[1697]: time="2025-12-12T17:21:37.862079617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-b56wz,Uid:eb16128d-beb5-452f-bdbb-8bfa8a64a1e0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"28a6429585a7eafad46b7a902e4b929299fe31acf5567778fd7b9e388a90a772\"" Dec 12 17:21:37.864262 containerd[1697]: time="2025-12-12T17:21:37.864234903Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:21:37.935000 audit[3091]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:37.935000 audit[3091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca5cacc0 a2=0 a3=1 items=0 ppid=2997 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.935000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:21:37.936000 audit[3093]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:37.936000 audit[3093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5bd64d0 a2=0 a3=1 items=0 ppid=2997 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:21:37.937000 audit[3092]: NETFILTER_CFG table=mangle:56 family=2 entries=1 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:37.937000 audit[3092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe0032710 a2=0 a3=1 items=0 ppid=2997 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:21:37.938000 audit[3095]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:37.938000 audit[3095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff9c23af0 a2=0 a3=1 items=0 ppid=2997 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:21:37.940000 audit[3096]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:37.940000 audit[3096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf0f92d0 a2=0 a3=1 items=0 ppid=2997 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:21:37.941000 audit[3098]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:37.941000 audit[3098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff9931920 a2=0 a3=1 items=0 ppid=2997 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:37.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:21:38.038000 audit[3099]: NETFILTER_CFG table=filter:60 family=2 entries=1 
op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.038000 audit[3099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffec42ffe0 a2=0 a3=1 items=0 ppid=2997 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:21:38.040000 audit[3101]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.040000 audit[3101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffffb3fc30 a2=0 a3=1 items=0 ppid=2997 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.040000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 17:21:38.043000 audit[3104]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.043000 audit[3104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff387ddb0 a2=0 a3=1 items=0 ppid=2997 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.043000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 17:21:38.044000 audit[3105]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.044000 audit[3105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3bf3580 a2=0 a3=1 items=0 ppid=2997 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.044000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:21:38.048000 audit[3107]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.048000 audit[3107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe2f188a0 a2=0 a3=1 items=0 ppid=2997 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.048000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:21:38.049000 audit[3108]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.049000 audit[3108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed9a8da0 a2=0 a3=1 items=0 ppid=2997 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.049000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:21:38.051000 audit[3110]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.051000 audit[3110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc03157c0 a2=0 a3=1 items=0 ppid=2997 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.051000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:21:38.056000 audit[3113]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.056000 audit[3113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff4355850 a2=0 a3=1 items=0 ppid=2997 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 17:21:38.057000 audit[3114]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.057000 audit[3114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2d8fd10 a2=0 a3=1 items=0 ppid=2997 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:21:38.060000 audit[3116]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.060000 audit[3116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe7b3e5e0 a2=0 a3=1 items=0 ppid=2997 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.060000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:21:38.061000 audit[3117]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.061000 audit[3117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb177aa0 a2=0 a3=1 items=0 ppid=2997 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:21:38.063000 audit[3119]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.063000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc7b0f010 a2=0 a3=1 items=0 ppid=2997 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:21:38.067000 audit[3122]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.067000 audit[3122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc1483370 a2=0 a3=1 items=0 ppid=2997 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:21:38.071000 audit[3125]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.071000 audit[3125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc46e7490 a2=0 a3=1 items=0 ppid=2997 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.071000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 
17:21:38.072000 audit[3126]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.072000 audit[3126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdaa988e0 a2=0 a3=1 items=0 ppid=2997 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:21:38.074000 audit[3128]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.074000 audit[3128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffffc7ff310 a2=0 a3=1 items=0 ppid=2997 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:21:38.077000 audit[3131]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.077000 audit[3131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe9085630 a2=0 a3=1 items=0 ppid=2997 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:21:38.078000 audit[3132]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.078000 audit[3132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee0297e0 a2=0 a3=1 items=0 ppid=2997 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:21:38.081000 audit[3134]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:38.081000 audit[3134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffc02615e0 a2=0 a3=1 items=0 ppid=2997 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.081000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:21:38.103000 audit[3140]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:38.103000 audit[3140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff04c2430 a2=0 a3=1 items=0 ppid=2997 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:38.114000 audit[3140]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:38.114000 audit[3140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff04c2430 a2=0 a3=1 items=0 ppid=2997 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:38.116000 audit[3145]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.116000 audit[3145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcba1cb90 a2=0 a3=1 items=0 ppid=2997 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:21:38.118000 audit[3147]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.118000 audit[3147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdd356910 a2=0 a3=1 items=0 ppid=2997 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.118000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 17:21:38.122000 audit[3150]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.122000 audit[3150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd6a32180 a2=0 a3=1 items=0 ppid=2997 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.122000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 17:21:38.123000 audit[3151]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.123000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef6043b0 a2=0 a3=1 items=0 ppid=2997 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:21:38.126000 audit[3153]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.126000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff66e71d0 a2=0 a3=1 items=0 ppid=2997 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.126000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:21:38.127000 audit[3154]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.127000 audit[3154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb153cd0 a2=0 a3=1 items=0 ppid=2997 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:21:38.129000 audit[3156]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.129000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc442f930 a2=0 a3=1 items=0 ppid=2997 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 17:21:38.133000 audit[3159]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.133000 audit[3159]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff513c0d0 a2=0 a3=1 items=0 ppid=2997 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.133000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:21:38.134000 audit[3160]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.134000 audit[3160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff91a5c30 a2=0 a3=1 items=0 ppid=2997 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.134000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:21:38.137000 audit[3162]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.137000 audit[3162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff5b5ebb0 a2=0 a3=1 items=0 ppid=2997 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:21:38.138000 audit[3163]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.138000 audit[3163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd447c5a0 a2=0 a3=1 items=0 ppid=2997 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.138000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:21:38.141000 audit[3165]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.141000 audit[3165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb570530 a2=0 a3=1 items=0 ppid=2997 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.141000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:21:38.144000 audit[3168]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.144000 audit[3168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffac98a20 a2=0 a3=1 items=0 ppid=2997 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:21:38.148000 audit[3171]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.148000 audit[3171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc3470840 a2=0 a3=1 items=0 ppid=2997 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 17:21:38.149000 audit[3172]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.149000 audit[3172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe3431c70 a2=0 a3=1 items=0 ppid=2997 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:21:38.152000 audit[3174]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.152000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff6507140 a2=0 a3=1 items=0 ppid=2997 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:21:38.155000 audit[3177]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.155000 audit[3177]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=fffffd6a1b90 a2=0 a3=1 items=0 ppid=2997 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:21:38.156000 audit[3178]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.156000 audit[3178]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe68cfa70 a2=0 a3=1 items=0 ppid=2997 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:21:38.158000 audit[3180]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.158000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffce3ccc0 a2=0 a3=1 items=0 ppid=2997 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:21:38.159000 audit[3181]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.159000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0ffc180 a2=0 a3=1 items=0 ppid=2997 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.159000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:21:38.162000 audit[3183]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:38.162000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff98e0310 a2=0 a3=1 items=0 ppid=2997 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.162000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:21:38.165000 audit[3186]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 
17:21:38.165000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff0731320 a2=0 a3=1 items=0 ppid=2997 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:21:38.168000 audit[3188]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:21:38.168000 audit[3188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffc41cd870 a2=0 a3=1 items=0 ppid=2997 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.168000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:38.169000 audit[3188]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:21:38.169000 audit[3188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc41cd870 a2=0 a3=1 items=0 ppid=2997 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:38.169000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:38.551123 kubelet[2889]: I1212 17:21:38.551037 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cnkf4" podStartSLOduration=1.551019288 podStartE2EDuration="1.551019288s" podCreationTimestamp="2025-12-12 17:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:21:38.550938408 +0000 UTC m=+7.273986269" watchObservedRunningTime="2025-12-12 17:21:38.551019288 +0000 UTC m=+7.274067149" Dec 12 17:21:40.161547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366791841.mount: Deactivated successfully. 
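The audit PROCTITLE fields in the records above are hex-encoded command lines whose arguments are separated by NUL bytes; the repeated iptables-restore value, for example, decodes to "iptables-restore -w 5 -W 100000 --noflush --counters". A minimal Python sketch (not part of the log) that decodes such a value, using one of the ip6tables records above as the sample:

def decode_proctitle(hex_value: str) -> str:
    # PROCTITLE is the process argv, hex-encoded, with NUL bytes between arguments.
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg)

# Sample copied verbatim from one of the ip6tables NETFILTER_CFG records above.
sample = "6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572"
print(decode_proctitle(sample))
# -> ip6tables -w 5 -W 100000 -N KUBE-EXTERNAL-SERVICES -t filter

Note that longer proctitles are truncated by auditd at a fixed length, so some of the decoded commands above will end mid-argument.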
Dec 12 17:21:43.304127 containerd[1697]: time="2025-12-12T17:21:43.304051435Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:43.305748 containerd[1697]: time="2025-12-12T17:21:43.305430038Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 12 17:21:43.306831 containerd[1697]: time="2025-12-12T17:21:43.306792362Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:43.309531 containerd[1697]: time="2025-12-12T17:21:43.309483809Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:43.310469 containerd[1697]: time="2025-12-12T17:21:43.310437491Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 5.446166348s" Dec 12 17:21:43.310524 containerd[1697]: time="2025-12-12T17:21:43.310471411Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:21:43.313537 containerd[1697]: time="2025-12-12T17:21:43.313506459Z" level=info msg="CreateContainer within sandbox \"28a6429585a7eafad46b7a902e4b929299fe31acf5567778fd7b9e388a90a772\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:21:43.324326 containerd[1697]: time="2025-12-12T17:21:43.324251607Z" level=info msg="Container ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:21:43.324807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755843831.mount: Deactivated successfully. Dec 12 17:21:43.331175 containerd[1697]: time="2025-12-12T17:21:43.331120265Z" level=info msg="CreateContainer within sandbox \"28a6429585a7eafad46b7a902e4b929299fe31acf5567778fd7b9e388a90a772\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40\"" Dec 12 17:21:43.331705 containerd[1697]: time="2025-12-12T17:21:43.331676306Z" level=info msg="StartContainer for \"ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40\"" Dec 12 17:21:43.332749 containerd[1697]: time="2025-12-12T17:21:43.332699989Z" level=info msg="connecting to shim ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40" address="unix:///run/containerd/s/183e350dc661b62f3d733e7feeae2cc4e06a4d2fe91f83f9b0a392c54089e2c9" protocol=ttrpc version=3 Dec 12 17:21:43.355792 systemd[1]: Started cri-containerd-ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40.scope - libcontainer container ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40. 
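The containerd entries above use logfmt-style key=value fields (time, level, msg), so image-pull and container-lifecycle events can be pulled out mechanically when post-processing a journal like this one. A rough Python sketch, assuming only the quoted/unquoted field layout visible above (not a complete logfmt parser):

import re

# containerd lines above look like: time="..." level=info msg="..."
# The quoted alternative also tolerates escaped quotes inside msg values.
FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_fields(line: str) -> dict:
    out = {}
    for key, value in FIELD.findall(line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')
        out[key] = value
    return out

line = 'time="2025-12-12T17:21:43.305430038Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434"'
print(parse_fields(line)["msg"])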
Dec 12 17:21:43.367000 audit: BPF prog-id=146 op=LOAD Dec 12 17:21:43.370276 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 17:21:43.370330 kernel: audit: type=1334 audit(1765560103.367:510): prog-id=146 op=LOAD Dec 12 17:21:43.370000 audit: BPF prog-id=147 op=LOAD Dec 12 17:21:43.370000 audit[3200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.376489 kernel: audit: type=1334 audit(1765560103.370:511): prog-id=147 op=LOAD Dec 12 17:21:43.376535 kernel: audit: type=1300 audit(1765560103.370:511): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.380717 kernel: audit: type=1327 audit(1765560103.370:511): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.370000 audit: BPF prog-id=147 op=UNLOAD Dec 12 17:21:43.382197 kernel: audit: type=1334 audit(1765560103.370:512): prog-id=147 op=UNLOAD Dec 12 17:21:43.370000 audit[3200]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.385842 kernel: audit: type=1300 audit(1765560103.370:512): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.389804 kernel: audit: type=1327 audit(1765560103.370:512): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.370000 audit: BPF prog-id=148 op=LOAD Dec 12 17:21:43.390734 kernel: audit: type=1334 audit(1765560103.370:513): prog-id=148 op=LOAD Dec 12 17:21:43.370000 audit[3200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.394482 kernel: audit: type=1300 audit(1765560103.370:513): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.398130 kernel: audit: type=1327 audit(1765560103.370:513): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.372000 audit: BPF prog-id=149 op=LOAD Dec 12 17:21:43.372000 audit[3200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.375000 audit: BPF prog-id=149 op=UNLOAD Dec 12 17:21:43.375000 audit[3200]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.375000 audit: BPF prog-id=148 op=UNLOAD Dec 12 17:21:43.375000 audit[3200]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.375000 audit: BPF prog-id=150 op=LOAD Dec 12 17:21:43.375000 audit[3200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3021 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:43.375000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666383363313532363736363238373163653837346133393161386164 Dec 12 17:21:43.405178 containerd[1697]: time="2025-12-12T17:21:43.405141937Z" level=info msg="StartContainer for \"ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40\" returns successfully" Dec 12 17:21:44.014103 kubelet[2889]: I1212 17:21:44.014021 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-b56wz" podStartSLOduration=1.565948245 podStartE2EDuration="7.013851878s" podCreationTimestamp="2025-12-12 17:21:37 +0000 UTC" firstStartedPulling="2025-12-12 17:21:37.863614781 +0000 UTC m=+6.586662642" lastFinishedPulling="2025-12-12 17:21:43.311518414 +0000 UTC m=+12.034566275" observedRunningTime="2025-12-12 17:21:43.559466978 +0000 UTC m=+12.282514839" watchObservedRunningTime="2025-12-12 17:21:44.013851878 +0000 UTC m=+12.736899739" Dec 12 17:21:49.073934 sudo[1934]: pam_unix(sudo:session): session closed for user root Dec 12 17:21:49.072000 audit[1934]: USER_END pid=1934 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.079288 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 12 17:21:49.079388 kernel: audit: type=1106 audit(1765560109.072:518): pid=1934 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.079449 kernel: audit: type=1104 audit(1765560109.072:519): pid=1934 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.072000 audit[1934]: CRED_DISP pid=1934 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.233376 sshd[1933]: Connection closed by 139.178.89.65 port 54084 Dec 12 17:21:49.232589 sshd-session[1930]: pam_unix(sshd:session): session closed for user core Dec 12 17:21:49.232000 audit[1930]: USER_END pid=1930 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:49.239887 systemd[1]: sshd@6-10.0.6.252:22-139.178.89.65:54084.service: Deactivated successfully. 
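The pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick Python check of the tigera-operator numbers, with the formulas read off the logged values rather than from the tracker's source:

from datetime import datetime, timezone

# Timestamps copied from the tigera-operator pod_startup_latency_tracker record above,
# rounded to whole microseconds.
created  = datetime(2025, 12, 12, 17, 21, 37, 0,      tzinfo=timezone.utc)  # podCreationTimestamp
pull_beg = datetime(2025, 12, 12, 17, 21, 37, 863615, tzinfo=timezone.utc)  # firstStartedPulling
pull_end = datetime(2025, 12, 12, 17, 21, 43, 311518, tzinfo=timezone.utc)  # lastFinishedPulling
observed = datetime(2025, 12, 12, 17, 21, 44, 13852,  tzinfo=timezone.utc)  # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (pull_end - pull_beg).total_seconds()
print(f"E2E={e2e:.3f}s SLO={slo:.3f}s")  # log reports 7.013851878s and 1.565948245s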
Dec 12 17:21:49.232000 audit[1930]: CRED_DISP pid=1930 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:49.243170 kernel: audit: type=1106 audit(1765560109.232:520): pid=1930 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:49.243254 kernel: audit: type=1104 audit(1765560109.232:521): pid=1930 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:21:49.244134 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:21:49.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.6.252:22-139.178.89.65:54084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.244359 systemd[1]: session-7.scope: Consumed 6.712s CPU time, 228.9M memory peak. Dec 12 17:21:49.247420 kernel: audit: type=1131 audit(1765560109.238:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.6.252:22-139.178.89.65:54084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.246850 systemd-logind[1663]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:21:49.248731 systemd-logind[1663]: Removed session 7. 
Dec 12 17:21:49.568000 audit[3292]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:49.568000 audit[3292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa865c50 a2=0 a3=1 items=0 ppid=2997 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:49.576790 kernel: audit: type=1325 audit(1765560109.568:523): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:49.576881 kernel: audit: type=1300 audit(1765560109.568:523): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa865c50 a2=0 a3=1 items=0 ppid=2997 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:49.568000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:49.579321 kernel: audit: type=1327 audit(1765560109.568:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:49.579457 kernel: audit: type=1325 audit(1765560109.576:524): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:49.576000 audit[3292]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:49.576000 audit[3292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa865c50 a2=0 a3=1 items=0 ppid=2997 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:49.585077 kernel: audit: type=1300 audit(1765560109.576:524): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa865c50 a2=0 a3=1 items=0 ppid=2997 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:49.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:49.591000 audit[3294]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:49.591000 audit[3294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffcd79e00 a2=0 a3=1 items=0 ppid=2997 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:49.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:49.596000 audit[3294]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 12 17:21:49.596000 audit[3294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffcd79e00 a2=0 a3=1 items=0 ppid=2997 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:49.596000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:52.672000 audit[3297]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:52.672000 audit[3297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffce9f80c0 a2=0 a3=1 items=0 ppid=2997 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:52.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:52.677000 audit[3297]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:52.677000 audit[3297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffce9f80c0 a2=0 a3=1 items=0 ppid=2997 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:52.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:52.716000 audit[3299]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:52.716000 audit[3299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff2e2f280 a2=0 a3=1 items=0 ppid=2997 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:52.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:52.727000 audit[3299]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:52.727000 audit[3299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff2e2f280 a2=0 a3=1 items=0 ppid=2997 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:52.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:53.736000 audit[3301]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:53.736000 audit[3301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffca5e8770 a2=0 a3=1 items=0 
ppid=2997 pid=3301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:53.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:53.743000 audit[3301]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:53.743000 audit[3301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca5e8770 a2=0 a3=1 items=0 ppid=2997 pid=3301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:53.743000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:54.691000 audit[3303]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:54.695413 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 12 17:21:54.695574 kernel: audit: type=1325 audit(1765560114.691:533): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:54.691000 audit[3303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff1856b50 a2=0 a3=1 items=0 ppid=2997 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:54.700065 kernel: audit: type=1300 audit(1765560114.691:533): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff1856b50 a2=0 a3=1 items=0 ppid=2997 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:54.700143 kernel: audit: type=1327 audit(1765560114.691:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:54.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:54.703000 audit[3303]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:54.703000 audit[3303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff1856b50 a2=0 a3=1 items=0 ppid=2997 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:54.711329 kernel: audit: type=1325 audit(1765560114.703:534): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:54.711442 kernel: audit: type=1300 audit(1765560114.703:534): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff1856b50 a2=0 a3=1 items=0 ppid=2997 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:54.711469 kernel: audit: type=1327 audit(1765560114.703:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:54.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:54.722204 systemd[1]: Created slice kubepods-besteffort-pod783334a7_0151_4a42_84cc_dbd006a7c596.slice - libcontainer container kubepods-besteffort-pod783334a7_0151_4a42_84cc_dbd006a7c596.slice. Dec 12 17:21:54.745701 kubelet[2889]: I1212 17:21:54.745655 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2ng\" (UniqueName: \"kubernetes.io/projected/783334a7-0151-4a42-84cc-dbd006a7c596-kube-api-access-xg2ng\") pod \"calico-typha-5845b58bf8-npwk9\" (UID: \"783334a7-0151-4a42-84cc-dbd006a7c596\") " pod="calico-system/calico-typha-5845b58bf8-npwk9" Dec 12 17:21:54.745701 kubelet[2889]: I1212 17:21:54.745703 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/783334a7-0151-4a42-84cc-dbd006a7c596-tigera-ca-bundle\") pod \"calico-typha-5845b58bf8-npwk9\" (UID: \"783334a7-0151-4a42-84cc-dbd006a7c596\") " pod="calico-system/calico-typha-5845b58bf8-npwk9" Dec 12 17:21:54.746079 kubelet[2889]: I1212 17:21:54.745730 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/783334a7-0151-4a42-84cc-dbd006a7c596-typha-certs\") pod \"calico-typha-5845b58bf8-npwk9\" (UID: \"783334a7-0151-4a42-84cc-dbd006a7c596\") " pod="calico-system/calico-typha-5845b58bf8-npwk9" Dec 12 17:21:54.907736 systemd[1]: Created slice kubepods-besteffort-pod6e080cae_683b_44cb_aa02_c5e3e43ade1e.slice - libcontainer container kubepods-besteffort-pod6e080cae_683b_44cb_aa02_c5e3e43ade1e.slice. 
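Comparing the slice name kubepods-besteffort-pod783334a7_0151_4a42_84cc_dbd006a7c596.slice with the pod UID 783334a7-0151-4a42-84cc-dbd006a7c596 in the volume records above shows that the cgroup slice simply embeds the pod UID with dashes replaced by underscores (systemd escapes "-" in unit names). A small helper sketch for the besteffort pattern seen here; the function names are illustrative, not kubelet API:

def besteffort_slice_for_uid(pod_uid: str) -> str:
    # Matches the kubepods-besteffort-pod<uid>.slice names created in this log.
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

def uid_from_besteffort_slice(slice_name: str) -> str:
    inner = slice_name.removeprefix("kubepods-besteffort-pod").removesuffix(".slice")
    return inner.replace("_", "-")

print(besteffort_slice_for_uid("783334a7-0151-4a42-84cc-dbd006a7c596"))
# -> kubepods-besteffort-pod783334a7_0151_4a42_84cc_dbd006a7c596.slice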
Dec 12 17:21:54.947774 kubelet[2889]: I1212 17:21:54.947652 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6e080cae-683b-44cb-aa02-c5e3e43ade1e-cni-net-dir\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.947774 kubelet[2889]: I1212 17:21:54.947701 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6e080cae-683b-44cb-aa02-c5e3e43ade1e-flexvol-driver-host\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.947774 kubelet[2889]: I1212 17:21:54.947720 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6e080cae-683b-44cb-aa02-c5e3e43ade1e-var-run-calico\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.947774 kubelet[2889]: I1212 17:21:54.947736 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv4q5\" (UniqueName: \"kubernetes.io/projected/6e080cae-683b-44cb-aa02-c5e3e43ade1e-kube-api-access-mv4q5\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.947774 kubelet[2889]: I1212 17:21:54.947754 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6e080cae-683b-44cb-aa02-c5e3e43ade1e-cni-log-dir\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.947977 kubelet[2889]: I1212 17:21:54.947768 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e080cae-683b-44cb-aa02-c5e3e43ade1e-var-lib-calico\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.947977 kubelet[2889]: I1212 17:21:54.947783 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e080cae-683b-44cb-aa02-c5e3e43ade1e-tigera-ca-bundle\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.947977 kubelet[2889]: I1212 17:21:54.947803 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e080cae-683b-44cb-aa02-c5e3e43ade1e-lib-modules\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.947977 kubelet[2889]: I1212 17:21:54.947843 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6e080cae-683b-44cb-aa02-c5e3e43ade1e-node-certs\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.947977 kubelet[2889]: I1212 17:21:54.947859 2889 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6e080cae-683b-44cb-aa02-c5e3e43ade1e-cni-bin-dir\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.948076 kubelet[2889]: I1212 17:21:54.947873 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6e080cae-683b-44cb-aa02-c5e3e43ade1e-policysync\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:54.948076 kubelet[2889]: I1212 17:21:54.947886 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e080cae-683b-44cb-aa02-c5e3e43ade1e-xtables-lock\") pod \"calico-node-n5wv7\" (UID: \"6e080cae-683b-44cb-aa02-c5e3e43ade1e\") " pod="calico-system/calico-node-n5wv7" Dec 12 17:21:55.026674 containerd[1697]: time="2025-12-12T17:21:55.026630809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5845b58bf8-npwk9,Uid:783334a7-0151-4a42-84cc-dbd006a7c596,Namespace:calico-system,Attempt:0,}" Dec 12 17:21:55.043922 containerd[1697]: time="2025-12-12T17:21:55.043864614Z" level=info msg="connecting to shim 4e5d8d33f21b2ad91969b561e2df4f2537c4c0cf1dec401a82c8cea83353ed38" address="unix:///run/containerd/s/750d193c886c238fd61abb682e190d9d910976161463b729f08a0f50a7ab3524" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:21:55.049417 kubelet[2889]: E1212 17:21:55.049373 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.049580 kubelet[2889]: W1212 17:21:55.049395 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.049580 kubelet[2889]: E1212 17:21:55.049553 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.053197 kubelet[2889]: E1212 17:21:55.053172 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.053355 kubelet[2889]: W1212 17:21:55.053290 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.053355 kubelet[2889]: E1212 17:21:55.053313 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.062765 kubelet[2889]: E1212 17:21:55.062675 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.062765 kubelet[2889]: W1212 17:21:55.062749 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.062906 kubelet[2889]: E1212 17:21:55.062789 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.077746 systemd[1]: Started cri-containerd-4e5d8d33f21b2ad91969b561e2df4f2537c4c0cf1dec401a82c8cea83353ed38.scope - libcontainer container 4e5d8d33f21b2ad91969b561e2df4f2537c4c0cf1dec401a82c8cea83353ed38. Dec 12 17:21:55.093953 kubelet[2889]: E1212 17:21:55.093367 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:21:55.103000 audit: BPF prog-id=151 op=LOAD Dec 12 17:21:55.106422 kernel: audit: type=1334 audit(1765560115.103:535): prog-id=151 op=LOAD Dec 12 17:21:55.105000 audit: BPF prog-id=152 op=LOAD Dec 12 17:21:55.105000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=3314 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.112073 kernel: audit: type=1334 audit(1765560115.105:536): prog-id=152 op=LOAD Dec 12 17:21:55.112140 kernel: audit: type=1300 audit(1765560115.105:536): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=3314 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465356438643333663231623261643931393639623536316532646634 Dec 12 17:21:55.116040 kernel: audit: type=1327 audit(1765560115.105:536): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465356438643333663231623261643931393639623536316532646634 Dec 12 17:21:55.106000 audit: BPF prog-id=152 op=UNLOAD Dec 12 17:21:55.106000 audit[3326]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3314 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.106000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465356438643333663231623261643931393639623536316532646634 Dec 12 17:21:55.106000 audit: BPF prog-id=153 op=LOAD Dec 12 17:21:55.106000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=3314 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465356438643333663231623261643931393639623536316532646634 Dec 12 17:21:55.107000 audit: BPF prog-id=154 op=LOAD Dec 12 17:21:55.107000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=3314 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465356438643333663231623261643931393639623536316532646634 Dec 12 17:21:55.107000 audit: BPF prog-id=154 op=UNLOAD Dec 12 17:21:55.107000 audit[3326]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3314 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465356438643333663231623261643931393639623536316532646634 Dec 12 17:21:55.107000 audit: BPF prog-id=153 op=UNLOAD Dec 12 17:21:55.107000 audit[3326]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3314 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465356438643333663231623261643931393639623536316532646634 Dec 12 17:21:55.107000 audit: BPF prog-id=155 op=LOAD Dec 12 17:21:55.107000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=3314 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.107000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465356438643333663231623261643931393639623536316532646634 Dec 12 17:21:55.135417 kubelet[2889]: E1212 17:21:55.135355 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.135417 kubelet[2889]: W1212 17:21:55.135409 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.135531 kubelet[2889]: E1212 17:21:55.135435 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.135708 kubelet[2889]: E1212 17:21:55.135693 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.135747 kubelet[2889]: W1212 17:21:55.135707 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.135747 kubelet[2889]: E1212 17:21:55.135744 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.136027 kubelet[2889]: E1212 17:21:55.136009 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.136077 kubelet[2889]: W1212 17:21:55.136023 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.136106 kubelet[2889]: E1212 17:21:55.136084 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.136262 kubelet[2889]: E1212 17:21:55.136243 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.136299 kubelet[2889]: W1212 17:21:55.136275 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.136299 kubelet[2889]: E1212 17:21:55.136285 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.136527 kubelet[2889]: E1212 17:21:55.136513 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.136527 kubelet[2889]: W1212 17:21:55.136525 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.136621 kubelet[2889]: E1212 17:21:55.136535 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.136776 kubelet[2889]: E1212 17:21:55.136762 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.136776 kubelet[2889]: W1212 17:21:55.136774 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.136846 kubelet[2889]: E1212 17:21:55.136784 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.137105 kubelet[2889]: E1212 17:21:55.137067 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.137105 kubelet[2889]: W1212 17:21:55.137082 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.137166 kubelet[2889]: E1212 17:21:55.137111 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.137522 kubelet[2889]: E1212 17:21:55.137283 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.137522 kubelet[2889]: W1212 17:21:55.137305 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.137522 kubelet[2889]: E1212 17:21:55.137316 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.137522 kubelet[2889]: E1212 17:21:55.137531 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.137644 kubelet[2889]: W1212 17:21:55.137540 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.137644 kubelet[2889]: E1212 17:21:55.137550 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.138041 kubelet[2889]: E1212 17:21:55.138003 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.138041 kubelet[2889]: W1212 17:21:55.138035 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.138166 kubelet[2889]: E1212 17:21:55.138047 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.138352 kubelet[2889]: E1212 17:21:55.138334 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.138352 kubelet[2889]: W1212 17:21:55.138350 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.138436 kubelet[2889]: E1212 17:21:55.138381 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.138712 kubelet[2889]: E1212 17:21:55.138602 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.138712 kubelet[2889]: W1212 17:21:55.138616 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.138712 kubelet[2889]: E1212 17:21:55.138628 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.138827 kubelet[2889]: E1212 17:21:55.138809 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.138827 kubelet[2889]: W1212 17:21:55.138819 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.138827 kubelet[2889]: E1212 17:21:55.138827 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.139162 kubelet[2889]: E1212 17:21:55.138965 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.139162 kubelet[2889]: W1212 17:21:55.138974 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.139162 kubelet[2889]: E1212 17:21:55.138982 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.139162 kubelet[2889]: E1212 17:21:55.139088 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.139162 kubelet[2889]: W1212 17:21:55.139095 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.139162 kubelet[2889]: E1212 17:21:55.139102 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.139313 kubelet[2889]: E1212 17:21:55.139217 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.139313 kubelet[2889]: W1212 17:21:55.139224 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.139313 kubelet[2889]: E1212 17:21:55.139231 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.139741 kubelet[2889]: E1212 17:21:55.139356 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.139741 kubelet[2889]: W1212 17:21:55.139363 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.139741 kubelet[2889]: E1212 17:21:55.139370 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.139741 kubelet[2889]: E1212 17:21:55.139494 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.139741 kubelet[2889]: W1212 17:21:55.139500 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.139741 kubelet[2889]: E1212 17:21:55.139507 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.139741 kubelet[2889]: E1212 17:21:55.139617 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.139741 kubelet[2889]: W1212 17:21:55.139623 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.139741 kubelet[2889]: E1212 17:21:55.139631 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.140051 kubelet[2889]: E1212 17:21:55.139795 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.140051 kubelet[2889]: W1212 17:21:55.139803 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.140051 kubelet[2889]: E1212 17:21:55.139821 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.140800 containerd[1697]: time="2025-12-12T17:21:55.140766667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5845b58bf8-npwk9,Uid:783334a7-0151-4a42-84cc-dbd006a7c596,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e5d8d33f21b2ad91969b561e2df4f2537c4c0cf1dec401a82c8cea83353ed38\"" Dec 12 17:21:55.143360 containerd[1697]: time="2025-12-12T17:21:55.143329993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:21:55.149035 kubelet[2889]: E1212 17:21:55.149010 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.149035 kubelet[2889]: W1212 17:21:55.149032 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.149165 kubelet[2889]: E1212 17:21:55.149050 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.149165 kubelet[2889]: I1212 17:21:55.149073 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fd99a4f-5151-4d6d-a968-dc993caff3f6-kubelet-dir\") pod \"csi-node-driver-79stw\" (UID: \"3fd99a4f-5151-4d6d-a968-dc993caff3f6\") " pod="calico-system/csi-node-driver-79stw" Dec 12 17:21:55.149266 kubelet[2889]: E1212 17:21:55.149197 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.149266 kubelet[2889]: W1212 17:21:55.149205 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.149266 kubelet[2889]: E1212 17:21:55.149220 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.149266 kubelet[2889]: I1212 17:21:55.149234 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3fd99a4f-5151-4d6d-a968-dc993caff3f6-registration-dir\") pod \"csi-node-driver-79stw\" (UID: \"3fd99a4f-5151-4d6d-a968-dc993caff3f6\") " pod="calico-system/csi-node-driver-79stw" Dec 12 17:21:55.149579 kubelet[2889]: E1212 17:21:55.149367 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.149579 kubelet[2889]: W1212 17:21:55.149375 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.149579 kubelet[2889]: E1212 17:21:55.149388 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.149579 kubelet[2889]: I1212 17:21:55.149415 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3fd99a4f-5151-4d6d-a968-dc993caff3f6-socket-dir\") pod \"csi-node-driver-79stw\" (UID: \"3fd99a4f-5151-4d6d-a968-dc993caff3f6\") " pod="calico-system/csi-node-driver-79stw" Dec 12 17:21:55.149734 kubelet[2889]: E1212 17:21:55.149720 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.149785 kubelet[2889]: W1212 17:21:55.149774 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.149846 kubelet[2889]: E1212 17:21:55.149836 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.150047 kubelet[2889]: E1212 17:21:55.150034 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.150111 kubelet[2889]: W1212 17:21:55.150099 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.150174 kubelet[2889]: E1212 17:21:55.150163 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.150379 kubelet[2889]: E1212 17:21:55.150364 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.150464 kubelet[2889]: W1212 17:21:55.150380 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.150464 kubelet[2889]: E1212 17:21:55.150395 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.150547 kubelet[2889]: E1212 17:21:55.150535 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.150547 kubelet[2889]: W1212 17:21:55.150543 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.150644 kubelet[2889]: E1212 17:21:55.150572 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.150691 kubelet[2889]: E1212 17:21:55.150676 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.150691 kubelet[2889]: W1212 17:21:55.150689 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.150816 kubelet[2889]: E1212 17:21:55.150717 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.150845 kubelet[2889]: E1212 17:21:55.150816 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.150845 kubelet[2889]: W1212 17:21:55.150824 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.150845 kubelet[2889]: E1212 17:21:55.150832 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.150945 kubelet[2889]: I1212 17:21:55.150807 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3fd99a4f-5151-4d6d-a968-dc993caff3f6-varrun\") pod \"csi-node-driver-79stw\" (UID: \"3fd99a4f-5151-4d6d-a968-dc993caff3f6\") " pod="calico-system/csi-node-driver-79stw" Dec 12 17:21:55.150945 kubelet[2889]: E1212 17:21:55.150942 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.151056 kubelet[2889]: W1212 17:21:55.150949 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.151056 kubelet[2889]: E1212 17:21:55.150956 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.151304 kubelet[2889]: E1212 17:21:55.151229 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.151304 kubelet[2889]: W1212 17:21:55.151244 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.151304 kubelet[2889]: E1212 17:21:55.151263 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.151639 kubelet[2889]: E1212 17:21:55.151559 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.151639 kubelet[2889]: W1212 17:21:55.151574 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.151639 kubelet[2889]: E1212 17:21:55.151593 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.151893 kubelet[2889]: E1212 17:21:55.151879 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.151967 kubelet[2889]: W1212 17:21:55.151955 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.152029 kubelet[2889]: E1212 17:21:55.152018 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.152107 kubelet[2889]: I1212 17:21:55.152094 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrczb\" (UniqueName: \"kubernetes.io/projected/3fd99a4f-5151-4d6d-a968-dc993caff3f6-kube-api-access-wrczb\") pod \"csi-node-driver-79stw\" (UID: \"3fd99a4f-5151-4d6d-a968-dc993caff3f6\") " pod="calico-system/csi-node-driver-79stw" Dec 12 17:21:55.152343 kubelet[2889]: E1212 17:21:55.152323 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.152374 kubelet[2889]: W1212 17:21:55.152344 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.152374 kubelet[2889]: E1212 17:21:55.152359 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.152516 kubelet[2889]: E1212 17:21:55.152504 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.152516 kubelet[2889]: W1212 17:21:55.152515 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.152576 kubelet[2889]: E1212 17:21:55.152525 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.210803 containerd[1697]: time="2025-12-12T17:21:55.210684769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n5wv7,Uid:6e080cae-683b-44cb-aa02-c5e3e43ade1e,Namespace:calico-system,Attempt:0,}" Dec 12 17:21:55.237949 containerd[1697]: time="2025-12-12T17:21:55.237903119Z" level=info msg="connecting to shim 1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf" address="unix:///run/containerd/s/2a6094ef8c350b2acd322bc5d33dfa7a3baac08828998d74224f06800c3a291c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:21:55.252754 kubelet[2889]: E1212 17:21:55.252604 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.252754 kubelet[2889]: W1212 17:21:55.252627 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.252754 kubelet[2889]: E1212 17:21:55.252645 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.252992 kubelet[2889]: E1212 17:21:55.252978 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.253048 kubelet[2889]: W1212 17:21:55.253037 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.253106 kubelet[2889]: E1212 17:21:55.253094 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.253351 kubelet[2889]: E1212 17:21:55.253335 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.253393 kubelet[2889]: W1212 17:21:55.253350 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.253393 kubelet[2889]: E1212 17:21:55.253375 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.253592 kubelet[2889]: E1212 17:21:55.253582 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.253592 kubelet[2889]: W1212 17:21:55.253593 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.253663 kubelet[2889]: E1212 17:21:55.253610 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.253770 kubelet[2889]: E1212 17:21:55.253759 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.253800 kubelet[2889]: W1212 17:21:55.253770 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.253800 kubelet[2889]: E1212 17:21:55.253790 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.254262 kubelet[2889]: E1212 17:21:55.254245 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.254262 kubelet[2889]: W1212 17:21:55.254261 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.254326 kubelet[2889]: E1212 17:21:55.254286 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.254557 kubelet[2889]: E1212 17:21:55.254544 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.254595 kubelet[2889]: W1212 17:21:55.254558 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.254654 kubelet[2889]: E1212 17:21:55.254638 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.254810 kubelet[2889]: E1212 17:21:55.254791 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.254810 kubelet[2889]: W1212 17:21:55.254806 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.254880 kubelet[2889]: E1212 17:21:55.254835 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.255003 kubelet[2889]: E1212 17:21:55.254988 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.255003 kubelet[2889]: W1212 17:21:55.255001 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.255171 kubelet[2889]: E1212 17:21:55.255015 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.255264 kubelet[2889]: E1212 17:21:55.255248 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.255334 kubelet[2889]: W1212 17:21:55.255311 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.255414 kubelet[2889]: E1212 17:21:55.255394 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.255601 kubelet[2889]: E1212 17:21:55.255586 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.255601 kubelet[2889]: W1212 17:21:55.255600 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.255666 kubelet[2889]: E1212 17:21:55.255614 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.255846 kubelet[2889]: E1212 17:21:55.255831 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.255877 kubelet[2889]: W1212 17:21:55.255847 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.255877 kubelet[2889]: E1212 17:21:55.255870 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.256040 kubelet[2889]: E1212 17:21:55.256029 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.256068 kubelet[2889]: W1212 17:21:55.256041 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.256095 kubelet[2889]: E1212 17:21:55.256070 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.256197 kubelet[2889]: E1212 17:21:55.256187 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.256226 kubelet[2889]: W1212 17:21:55.256197 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.256226 kubelet[2889]: E1212 17:21:55.256219 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.256346 kubelet[2889]: E1212 17:21:55.256335 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.256346 kubelet[2889]: W1212 17:21:55.256346 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.256427 kubelet[2889]: E1212 17:21:55.256367 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.256532 kubelet[2889]: E1212 17:21:55.256519 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.256567 kubelet[2889]: W1212 17:21:55.256532 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.256567 kubelet[2889]: E1212 17:21:55.256551 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.256707 kubelet[2889]: E1212 17:21:55.256696 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.256736 kubelet[2889]: W1212 17:21:55.256707 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.256736 kubelet[2889]: E1212 17:21:55.256720 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.256926 kubelet[2889]: E1212 17:21:55.256914 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.256926 kubelet[2889]: W1212 17:21:55.256925 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.256982 kubelet[2889]: E1212 17:21:55.256941 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.257155 kubelet[2889]: E1212 17:21:55.257141 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.257186 kubelet[2889]: W1212 17:21:55.257157 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.257186 kubelet[2889]: E1212 17:21:55.257172 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.257736 kubelet[2889]: E1212 17:21:55.257715 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.257784 kubelet[2889]: W1212 17:21:55.257736 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.257784 kubelet[2889]: E1212 17:21:55.257767 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.258002 kubelet[2889]: E1212 17:21:55.257989 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.258002 kubelet[2889]: W1212 17:21:55.258001 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.258064 kubelet[2889]: E1212 17:21:55.258023 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.258196 kubelet[2889]: E1212 17:21:55.258185 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.258196 kubelet[2889]: W1212 17:21:55.258196 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.258337 kubelet[2889]: E1212 17:21:55.258318 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.258449 kubelet[2889]: E1212 17:21:55.258355 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.258509 kubelet[2889]: W1212 17:21:55.258497 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.258584 kubelet[2889]: E1212 17:21:55.258571 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.258770 systemd[1]: Started cri-containerd-1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf.scope - libcontainer container 1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf. Dec 12 17:21:55.259671 kubelet[2889]: E1212 17:21:55.259646 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.259711 kubelet[2889]: W1212 17:21:55.259671 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.259711 kubelet[2889]: E1212 17:21:55.259691 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.260190 kubelet[2889]: E1212 17:21:55.260171 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.260284 kubelet[2889]: W1212 17:21:55.260190 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.260284 kubelet[2889]: E1212 17:21:55.260206 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:55.271242 kubelet[2889]: E1212 17:21:55.271213 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:55.271242 kubelet[2889]: W1212 17:21:55.271231 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:55.271453 kubelet[2889]: E1212 17:21:55.271251 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:55.275000 audit: BPF prog-id=156 op=LOAD Dec 12 17:21:55.275000 audit: BPF prog-id=157 op=LOAD Dec 12 17:21:55.275000 audit[3420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3410 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386139653332366461376461663836303662303463326666646639 Dec 12 17:21:55.275000 audit: BPF prog-id=157 op=UNLOAD Dec 12 17:21:55.275000 audit[3420]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386139653332366461376461663836303662303463326666646639 Dec 12 17:21:55.275000 audit: BPF prog-id=158 op=LOAD Dec 12 17:21:55.275000 audit[3420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3410 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386139653332366461376461663836303662303463326666646639 Dec 12 17:21:55.275000 audit: BPF prog-id=159 op=LOAD Dec 12 17:21:55.275000 audit[3420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3410 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386139653332366461376461663836303662303463326666646639 Dec 12 17:21:55.275000 audit: BPF prog-id=159 op=UNLOAD Dec 12 17:21:55.275000 audit[3420]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386139653332366461376461663836303662303463326666646639 Dec 12 17:21:55.275000 audit: BPF prog-id=158 op=UNLOAD Dec 
12 17:21:55.275000 audit[3420]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386139653332366461376461663836303662303463326666646639 Dec 12 17:21:55.275000 audit: BPF prog-id=160 op=LOAD Dec 12 17:21:55.275000 audit[3420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3410 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386139653332366461376461663836303662303463326666646639 Dec 12 17:21:55.291510 containerd[1697]: time="2025-12-12T17:21:55.291469379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n5wv7,Uid:6e080cae-683b-44cb-aa02-c5e3e43ade1e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf\"" Dec 12 17:21:55.729000 audit[3475]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:55.729000 audit[3475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffccee5a90 a2=0 a3=1 items=0 ppid=2997 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:55.737000 audit[3475]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:21:55.737000 audit[3475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffccee5a90 a2=0 a3=1 items=0 ppid=2997 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:55.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:21:56.505555 kubelet[2889]: E1212 17:21:56.505509 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:21:56.581535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924517619.mount: Deactivated successfully. 
Dec 12 17:21:57.436444 containerd[1697]: time="2025-12-12T17:21:57.436364832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:57.437834 containerd[1697]: time="2025-12-12T17:21:57.437783756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 12 17:21:57.439132 containerd[1697]: time="2025-12-12T17:21:57.439106199Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:57.441209 containerd[1697]: time="2025-12-12T17:21:57.441182965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:57.442336 containerd[1697]: time="2025-12-12T17:21:57.442211768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.298844214s" Dec 12 17:21:57.442336 containerd[1697]: time="2025-12-12T17:21:57.442247408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:21:57.442996 containerd[1697]: time="2025-12-12T17:21:57.442978770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:21:57.451762 containerd[1697]: time="2025-12-12T17:21:57.451715672Z" level=info msg="CreateContainer within sandbox \"4e5d8d33f21b2ad91969b561e2df4f2537c4c0cf1dec401a82c8cea83353ed38\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:21:57.465345 containerd[1697]: time="2025-12-12T17:21:57.463887464Z" level=info msg="Container 950fb6d824a94d30b94e86c41d447b0cabd013ec548d9f307c64e4fb3e57eb4a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:21:57.472420 containerd[1697]: time="2025-12-12T17:21:57.472334326Z" level=info msg="CreateContainer within sandbox \"4e5d8d33f21b2ad91969b561e2df4f2537c4c0cf1dec401a82c8cea83353ed38\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"950fb6d824a94d30b94e86c41d447b0cabd013ec548d9f307c64e4fb3e57eb4a\"" Dec 12 17:21:57.472805 containerd[1697]: time="2025-12-12T17:21:57.472764167Z" level=info msg="StartContainer for \"950fb6d824a94d30b94e86c41d447b0cabd013ec548d9f307c64e4fb3e57eb4a\"" Dec 12 17:21:57.473787 containerd[1697]: time="2025-12-12T17:21:57.473763249Z" level=info msg="connecting to shim 950fb6d824a94d30b94e86c41d447b0cabd013ec548d9f307c64e4fb3e57eb4a" address="unix:///run/containerd/s/750d193c886c238fd61abb682e190d9d910976161463b729f08a0f50a7ab3524" protocol=ttrpc version=3 Dec 12 17:21:57.498857 systemd[1]: Started cri-containerd-950fb6d824a94d30b94e86c41d447b0cabd013ec548d9f307c64e4fb3e57eb4a.scope - libcontainer container 950fb6d824a94d30b94e86c41d447b0cabd013ec548d9f307c64e4fb3e57eb4a. 
Dec 12 17:21:57.509000 audit: BPF prog-id=161 op=LOAD Dec 12 17:21:57.510000 audit: BPF prog-id=162 op=LOAD Dec 12 17:21:57.510000 audit[3487]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3314 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:57.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935306662366438323461393464333062393465383663343164343437 Dec 12 17:21:57.510000 audit: BPF prog-id=162 op=UNLOAD Dec 12 17:21:57.510000 audit[3487]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3314 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:57.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935306662366438323461393464333062393465383663343164343437 Dec 12 17:21:57.510000 audit: BPF prog-id=163 op=LOAD Dec 12 17:21:57.510000 audit[3487]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3314 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:57.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935306662366438323461393464333062393465383663343164343437 Dec 12 17:21:57.510000 audit: BPF prog-id=164 op=LOAD Dec 12 17:21:57.510000 audit[3487]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3314 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:57.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935306662366438323461393464333062393465383663343164343437 Dec 12 17:21:57.510000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:21:57.510000 audit[3487]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3314 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:57.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935306662366438323461393464333062393465383663343164343437 Dec 12 17:21:57.510000 audit: BPF prog-id=163 op=UNLOAD Dec 12 17:21:57.510000 audit[3487]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3314 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:57.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935306662366438323461393464333062393465383663343164343437 Dec 12 17:21:57.510000 audit: BPF prog-id=165 op=LOAD Dec 12 17:21:57.510000 audit[3487]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3314 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:57.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935306662366438323461393464333062393465383663343164343437 Dec 12 17:21:57.535392 containerd[1697]: time="2025-12-12T17:21:57.535327129Z" level=info msg="StartContainer for \"950fb6d824a94d30b94e86c41d447b0cabd013ec548d9f307c64e4fb3e57eb4a\" returns successfully" Dec 12 17:21:57.591387 kubelet[2889]: I1212 17:21:57.591326 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5845b58bf8-npwk9" podStartSLOduration=1.290305015 podStartE2EDuration="3.591308475s" podCreationTimestamp="2025-12-12 17:21:54 +0000 UTC" firstStartedPulling="2025-12-12 17:21:55.141803389 +0000 UTC m=+23.864851210" lastFinishedPulling="2025-12-12 17:21:57.442806849 +0000 UTC m=+26.165854670" observedRunningTime="2025-12-12 17:21:57.590371592 +0000 UTC m=+26.313419413" watchObservedRunningTime="2025-12-12 17:21:57.591308475 +0000 UTC m=+26.314356336" Dec 12 17:21:57.656809 kubelet[2889]: E1212 17:21:57.656777 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.656809 kubelet[2889]: W1212 17:21:57.656802 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.656957 kubelet[2889]: E1212 17:21:57.656824 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.656989 kubelet[2889]: E1212 17:21:57.656971 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.656989 kubelet[2889]: W1212 17:21:57.656979 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.657026 kubelet[2889]: E1212 17:21:57.656988 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:57.657154 kubelet[2889]: E1212 17:21:57.657141 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.657154 kubelet[2889]: W1212 17:21:57.657152 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.657269 kubelet[2889]: E1212 17:21:57.657160 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.657353 kubelet[2889]: E1212 17:21:57.657338 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.657353 kubelet[2889]: W1212 17:21:57.657347 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.657353 kubelet[2889]: E1212 17:21:57.657354 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.657540 kubelet[2889]: E1212 17:21:57.657508 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.657540 kubelet[2889]: W1212 17:21:57.657516 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.657540 kubelet[2889]: E1212 17:21:57.657526 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.658565 kubelet[2889]: E1212 17:21:57.658538 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.658565 kubelet[2889]: W1212 17:21:57.658557 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.658652 kubelet[2889]: E1212 17:21:57.658572 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.659133 kubelet[2889]: E1212 17:21:57.659111 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.659133 kubelet[2889]: W1212 17:21:57.659128 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.659225 kubelet[2889]: E1212 17:21:57.659140 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:57.659659 kubelet[2889]: E1212 17:21:57.659641 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.659697 kubelet[2889]: W1212 17:21:57.659681 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.659697 kubelet[2889]: E1212 17:21:57.659694 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.660793 kubelet[2889]: E1212 17:21:57.660774 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.660793 kubelet[2889]: W1212 17:21:57.660791 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.660875 kubelet[2889]: E1212 17:21:57.660803 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.660969 kubelet[2889]: E1212 17:21:57.660953 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.660969 kubelet[2889]: W1212 17:21:57.660968 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.661083 kubelet[2889]: E1212 17:21:57.660977 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.661204 kubelet[2889]: E1212 17:21:57.661191 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.661204 kubelet[2889]: W1212 17:21:57.661201 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.661289 kubelet[2889]: E1212 17:21:57.661209 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.661575 kubelet[2889]: E1212 17:21:57.661556 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.661617 kubelet[2889]: W1212 17:21:57.661577 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.661617 kubelet[2889]: E1212 17:21:57.661590 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:57.662231 kubelet[2889]: E1212 17:21:57.662184 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.662231 kubelet[2889]: W1212 17:21:57.662205 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.662231 kubelet[2889]: E1212 17:21:57.662218 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.662529 kubelet[2889]: E1212 17:21:57.662513 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.662529 kubelet[2889]: W1212 17:21:57.662529 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.662598 kubelet[2889]: E1212 17:21:57.662539 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.662719 kubelet[2889]: E1212 17:21:57.662706 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.662719 kubelet[2889]: W1212 17:21:57.662717 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.662783 kubelet[2889]: E1212 17:21:57.662725 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.672362 kubelet[2889]: E1212 17:21:57.672326 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.672362 kubelet[2889]: W1212 17:21:57.672354 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.672530 kubelet[2889]: E1212 17:21:57.672421 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.674018 kubelet[2889]: E1212 17:21:57.673834 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.674018 kubelet[2889]: W1212 17:21:57.673861 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.674018 kubelet[2889]: E1212 17:21:57.673879 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:57.674265 kubelet[2889]: E1212 17:21:57.674248 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.674298 kubelet[2889]: W1212 17:21:57.674265 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.674298 kubelet[2889]: E1212 17:21:57.674279 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.674485 kubelet[2889]: E1212 17:21:57.674471 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.674485 kubelet[2889]: W1212 17:21:57.674484 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.674557 kubelet[2889]: E1212 17:21:57.674498 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.674659 kubelet[2889]: E1212 17:21:57.674645 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.674659 kubelet[2889]: W1212 17:21:57.674656 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.674705 kubelet[2889]: E1212 17:21:57.674667 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.674953 kubelet[2889]: E1212 17:21:57.674937 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.674953 kubelet[2889]: W1212 17:21:57.674952 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.675020 kubelet[2889]: E1212 17:21:57.674968 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.675343 kubelet[2889]: E1212 17:21:57.675229 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.675343 kubelet[2889]: W1212 17:21:57.675259 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.675343 kubelet[2889]: E1212 17:21:57.675280 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:57.675484 kubelet[2889]: E1212 17:21:57.675461 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.675484 kubelet[2889]: W1212 17:21:57.675479 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.675484 kubelet[2889]: E1212 17:21:57.675497 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.675661 kubelet[2889]: E1212 17:21:57.675648 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.675661 kubelet[2889]: W1212 17:21:57.675659 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.675805 kubelet[2889]: E1212 17:21:57.675673 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.675938 kubelet[2889]: E1212 17:21:57.675926 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.675938 kubelet[2889]: W1212 17:21:57.675938 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.676002 kubelet[2889]: E1212 17:21:57.675952 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.676123 kubelet[2889]: E1212 17:21:57.676113 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.676162 kubelet[2889]: W1212 17:21:57.676124 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.676162 kubelet[2889]: E1212 17:21:57.676137 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.676569 kubelet[2889]: E1212 17:21:57.676472 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.676569 kubelet[2889]: W1212 17:21:57.676487 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.676569 kubelet[2889]: E1212 17:21:57.676508 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:57.676916 kubelet[2889]: E1212 17:21:57.676901 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.676980 kubelet[2889]: W1212 17:21:57.676968 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.677077 kubelet[2889]: E1212 17:21:57.677061 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.677369 kubelet[2889]: E1212 17:21:57.677355 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.677473 kubelet[2889]: W1212 17:21:57.677432 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.677702 kubelet[2889]: E1212 17:21:57.677607 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.678336 kubelet[2889]: E1212 17:21:57.678310 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.678336 kubelet[2889]: W1212 17:21:57.678335 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.678449 kubelet[2889]: E1212 17:21:57.678358 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.679315 kubelet[2889]: E1212 17:21:57.679297 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.679347 kubelet[2889]: W1212 17:21:57.679330 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.679376 kubelet[2889]: E1212 17:21:57.679349 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:57.679586 kubelet[2889]: E1212 17:21:57.679572 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.679586 kubelet[2889]: W1212 17:21:57.679586 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.679647 kubelet[2889]: E1212 17:21:57.679596 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:57.680740 kubelet[2889]: E1212 17:21:57.680720 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:57.680801 kubelet[2889]: W1212 17:21:57.680752 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:57.680801 kubelet[2889]: E1212 17:21:57.680769 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.506424 kubelet[2889]: E1212 17:21:58.506359 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:21:58.581016 kubelet[2889]: I1212 17:21:58.580975 2889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:21:58.668946 kubelet[2889]: E1212 17:21:58.668914 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.668946 kubelet[2889]: W1212 17:21:58.668938 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.669294 kubelet[2889]: E1212 17:21:58.668960 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.669294 kubelet[2889]: E1212 17:21:58.669122 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.669294 kubelet[2889]: W1212 17:21:58.669130 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.669294 kubelet[2889]: E1212 17:21:58.669138 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.669294 kubelet[2889]: E1212 17:21:58.669265 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.669294 kubelet[2889]: W1212 17:21:58.669272 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.669294 kubelet[2889]: E1212 17:21:58.669279 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:58.669519 kubelet[2889]: E1212 17:21:58.669415 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.669519 kubelet[2889]: W1212 17:21:58.669424 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.669519 kubelet[2889]: E1212 17:21:58.669432 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.669594 kubelet[2889]: E1212 17:21:58.669570 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.669594 kubelet[2889]: W1212 17:21:58.669587 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.669638 kubelet[2889]: E1212 17:21:58.669595 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.669715 kubelet[2889]: E1212 17:21:58.669705 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.669715 kubelet[2889]: W1212 17:21:58.669714 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.669754 kubelet[2889]: E1212 17:21:58.669721 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.669843 kubelet[2889]: E1212 17:21:58.669835 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.669868 kubelet[2889]: W1212 17:21:58.669844 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.669868 kubelet[2889]: E1212 17:21:58.669851 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.669981 kubelet[2889]: E1212 17:21:58.669972 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.670001 kubelet[2889]: W1212 17:21:58.669981 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.670001 kubelet[2889]: E1212 17:21:58.669988 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:58.670119 kubelet[2889]: E1212 17:21:58.670111 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.670140 kubelet[2889]: W1212 17:21:58.670119 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.670140 kubelet[2889]: E1212 17:21:58.670129 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.670246 kubelet[2889]: E1212 17:21:58.670237 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.670270 kubelet[2889]: W1212 17:21:58.670246 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.670270 kubelet[2889]: E1212 17:21:58.670253 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.670368 kubelet[2889]: E1212 17:21:58.670359 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.670391 kubelet[2889]: W1212 17:21:58.670367 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.670391 kubelet[2889]: E1212 17:21:58.670374 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.670522 kubelet[2889]: E1212 17:21:58.670512 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.670544 kubelet[2889]: W1212 17:21:58.670522 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.670544 kubelet[2889]: E1212 17:21:58.670529 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.670659 kubelet[2889]: E1212 17:21:58.670649 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.670680 kubelet[2889]: W1212 17:21:58.670659 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.670680 kubelet[2889]: E1212 17:21:58.670667 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:58.670790 kubelet[2889]: E1212 17:21:58.670781 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.670817 kubelet[2889]: W1212 17:21:58.670790 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.670817 kubelet[2889]: E1212 17:21:58.670798 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.670922 kubelet[2889]: E1212 17:21:58.670913 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.670942 kubelet[2889]: W1212 17:21:58.670922 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.670942 kubelet[2889]: E1212 17:21:58.670929 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.681690 kubelet[2889]: E1212 17:21:58.681640 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.681690 kubelet[2889]: W1212 17:21:58.681662 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.681690 kubelet[2889]: E1212 17:21:58.681680 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.682112 kubelet[2889]: E1212 17:21:58.682097 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.682112 kubelet[2889]: W1212 17:21:58.682111 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.682174 kubelet[2889]: E1212 17:21:58.682126 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.682327 kubelet[2889]: E1212 17:21:58.682314 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.682327 kubelet[2889]: W1212 17:21:58.682325 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.682380 kubelet[2889]: E1212 17:21:58.682343 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:58.682721 kubelet[2889]: E1212 17:21:58.682659 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.684342 kubelet[2889]: W1212 17:21:58.684238 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.684342 kubelet[2889]: E1212 17:21:58.684281 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.684587 kubelet[2889]: E1212 17:21:58.684563 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.684587 kubelet[2889]: W1212 17:21:58.684579 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.684587 kubelet[2889]: E1212 17:21:58.684590 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.684791 kubelet[2889]: E1212 17:21:58.684773 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.684791 kubelet[2889]: W1212 17:21:58.684788 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.684850 kubelet[2889]: E1212 17:21:58.684798 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.685047 kubelet[2889]: E1212 17:21:58.685024 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.685047 kubelet[2889]: W1212 17:21:58.685040 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.685110 kubelet[2889]: E1212 17:21:58.685053 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.689085 kubelet[2889]: E1212 17:21:58.688713 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.689085 kubelet[2889]: W1212 17:21:58.689083 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.689224 kubelet[2889]: E1212 17:21:58.689187 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:58.689351 kubelet[2889]: E1212 17:21:58.689331 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.689351 kubelet[2889]: W1212 17:21:58.689343 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.689426 kubelet[2889]: E1212 17:21:58.689384 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.689567 kubelet[2889]: E1212 17:21:58.689481 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.689567 kubelet[2889]: W1212 17:21:58.689493 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.689567 kubelet[2889]: E1212 17:21:58.689508 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.689698 kubelet[2889]: E1212 17:21:58.689682 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.689698 kubelet[2889]: W1212 17:21:58.689693 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.689753 kubelet[2889]: E1212 17:21:58.689707 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.690165 kubelet[2889]: E1212 17:21:58.689824 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.690165 kubelet[2889]: W1212 17:21:58.689835 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.690165 kubelet[2889]: E1212 17:21:58.689844 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.690165 kubelet[2889]: E1212 17:21:58.689994 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.690165 kubelet[2889]: W1212 17:21:58.690001 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.690165 kubelet[2889]: E1212 17:21:58.690010 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:58.690335 kubelet[2889]: E1212 17:21:58.690318 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.690335 kubelet[2889]: W1212 17:21:58.690331 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.690380 kubelet[2889]: E1212 17:21:58.690341 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.690500 kubelet[2889]: E1212 17:21:58.690487 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.690500 kubelet[2889]: W1212 17:21:58.690498 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.690544 kubelet[2889]: E1212 17:21:58.690507 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.690646 kubelet[2889]: E1212 17:21:58.690634 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.690646 kubelet[2889]: W1212 17:21:58.690645 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.690688 kubelet[2889]: E1212 17:21:58.690653 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.690798 kubelet[2889]: E1212 17:21:58.690787 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.690798 kubelet[2889]: W1212 17:21:58.690797 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.690840 kubelet[2889]: E1212 17:21:58.690805 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:21:58.691077 kubelet[2889]: E1212 17:21:58.691042 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:21:58.691077 kubelet[2889]: W1212 17:21:58.691055 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:21:58.691077 kubelet[2889]: E1212 17:21:58.691063 2889 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:21:58.941500 containerd[1697]: time="2025-12-12T17:21:58.941443622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:58.943127 containerd[1697]: time="2025-12-12T17:21:58.943080306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 17:21:58.944218 containerd[1697]: time="2025-12-12T17:21:58.944194429Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:58.946955 containerd[1697]: time="2025-12-12T17:21:58.946708676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:21:58.947370 containerd[1697]: time="2025-12-12T17:21:58.947347357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.504264867s" Dec 12 17:21:58.947448 containerd[1697]: time="2025-12-12T17:21:58.947377597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:21:58.951041 containerd[1697]: time="2025-12-12T17:21:58.951008407Z" level=info msg="CreateContainer within sandbox \"1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:21:58.960473 containerd[1697]: time="2025-12-12T17:21:58.959448069Z" level=info msg="Container a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:21:58.969993 containerd[1697]: time="2025-12-12T17:21:58.969854256Z" level=info msg="CreateContainer within sandbox \"1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e\"" Dec 12 17:21:58.972796 containerd[1697]: time="2025-12-12T17:21:58.972668663Z" level=info msg="StartContainer for \"a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e\"" Dec 12 17:21:58.974919 containerd[1697]: time="2025-12-12T17:21:58.974855629Z" level=info msg="connecting to shim a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e" address="unix:///run/containerd/s/2a6094ef8c350b2acd322bc5d33dfa7a3baac08828998d74224f06800c3a291c" protocol=ttrpc version=3 Dec 12 17:21:59.002719 systemd[1]: Started cri-containerd-a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e.scope - libcontainer container a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e. 
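Note on the repeated driver-call.go errors above: kubelet's dynamic FlexVolume probe executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and tries to unmarshal its stdout as JSON; because the binary is not installed yet ("executable file not found in $PATH"), stdout is empty and unmarshalling fails with "unexpected end of JSON input". The flexvol-driver container created above (from ghcr.io/flatcar/calico/pod2daemon-flexvol) is what normally puts that binary in place, after which these probe errors typically stop. Purely as an illustrative sketch of the driver-call contract (not Calico's real uds driver), any executable that answers "init" with a JSON status on stdout satisfies the probe:

    #!/usr/bin/env python3
    # Illustrative FlexVolume driver sketch: kubelet invokes the executable with a
    # sub-command and parses its stdout as JSON. An empty stdout is exactly what
    # produces the "unexpected end of JSON input" errors seen in the log above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Any operation this sketch does not implement.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())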
Dec 12 17:21:59.062000 audit: BPF prog-id=166 op=LOAD Dec 12 17:21:59.062000 audit[3598]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3410 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132653561373130343061363565653164636665623238396261303639 Dec 12 17:21:59.062000 audit: BPF prog-id=167 op=LOAD Dec 12 17:21:59.062000 audit[3598]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3410 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132653561373130343061363565653164636665623238396261303639 Dec 12 17:21:59.062000 audit: BPF prog-id=167 op=UNLOAD Dec 12 17:21:59.062000 audit[3598]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132653561373130343061363565653164636665623238396261303639 Dec 12 17:21:59.063000 audit: BPF prog-id=166 op=UNLOAD Dec 12 17:21:59.063000 audit[3598]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132653561373130343061363565653164636665623238396261303639 Dec 12 17:21:59.063000 audit: BPF prog-id=168 op=LOAD Dec 12 17:21:59.063000 audit[3598]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3410 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132653561373130343061363565653164636665623238396261303639 Dec 12 17:21:59.087576 containerd[1697]: time="2025-12-12T17:21:59.087536681Z" level=info msg="StartContainer for 
\"a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e\" returns successfully" Dec 12 17:21:59.099654 systemd[1]: cri-containerd-a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e.scope: Deactivated successfully. Dec 12 17:21:59.102811 containerd[1697]: time="2025-12-12T17:21:59.102769081Z" level=info msg="received container exit event container_id:\"a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e\" id:\"a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e\" pid:3612 exited_at:{seconds:1765560119 nanos:101852079}" Dec 12 17:21:59.103000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:21:59.130830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2e5a71040a65ee1dcfeb289ba0697908f22db06e8dc104d42401281d3acbc5e-rootfs.mount: Deactivated successfully. Dec 12 17:22:00.505490 kubelet[2889]: E1212 17:22:00.505433 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:02.505884 kubelet[2889]: E1212 17:22:02.505840 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:03.595045 containerd[1697]: time="2025-12-12T17:22:03.594994470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:22:04.505752 kubelet[2889]: E1212 17:22:04.505691 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:06.505826 kubelet[2889]: E1212 17:22:06.505779 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:07.544359 containerd[1697]: time="2025-12-12T17:22:07.544303409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:07.545476 containerd[1697]: time="2025-12-12T17:22:07.545394972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 12 17:22:07.546806 containerd[1697]: time="2025-12-12T17:22:07.546760255Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:07.549390 containerd[1697]: time="2025-12-12T17:22:07.549333262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:07.550006 containerd[1697]: time="2025-12-12T17:22:07.549961104Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.954922154s" Dec 12 17:22:07.550006 containerd[1697]: time="2025-12-12T17:22:07.549996824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:22:07.553483 containerd[1697]: time="2025-12-12T17:22:07.552930271Z" level=info msg="CreateContainer within sandbox \"1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:22:07.564137 containerd[1697]: time="2025-12-12T17:22:07.564065220Z" level=info msg="Container 400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:07.572423 containerd[1697]: time="2025-12-12T17:22:07.572357282Z" level=info msg="CreateContainer within sandbox \"1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7\"" Dec 12 17:22:07.573237 containerd[1697]: time="2025-12-12T17:22:07.573208724Z" level=info msg="StartContainer for \"400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7\"" Dec 12 17:22:07.575087 containerd[1697]: time="2025-12-12T17:22:07.575038929Z" level=info msg="connecting to shim 400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7" address="unix:///run/containerd/s/2a6094ef8c350b2acd322bc5d33dfa7a3baac08828998d74224f06800c3a291c" protocol=ttrpc version=3 Dec 12 17:22:07.599664 systemd[1]: Started cri-containerd-400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7.scope - libcontainer container 400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7. 
Dec 12 17:22:07.672000 audit: BPF prog-id=169 op=LOAD Dec 12 17:22:07.674609 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 12 17:22:07.674677 kernel: audit: type=1334 audit(1765560127.672:567): prog-id=169 op=LOAD Dec 12 17:22:07.672000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3410 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:07.678913 kernel: audit: type=1300 audit(1765560127.672:567): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3410 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:07.679029 kernel: audit: type=1327 audit(1765560127.672:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306434376538353030633463323561613033666239623637383137 Dec 12 17:22:07.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306434376538353030633463323561613033666239623637383137 Dec 12 17:22:07.672000 audit: BPF prog-id=170 op=LOAD Dec 12 17:22:07.672000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3410 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:07.686810 kernel: audit: type=1334 audit(1765560127.672:568): prog-id=170 op=LOAD Dec 12 17:22:07.686890 kernel: audit: type=1300 audit(1765560127.672:568): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3410 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:07.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306434376538353030633463323561613033666239623637383137 Dec 12 17:22:07.690794 kernel: audit: type=1327 audit(1765560127.672:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306434376538353030633463323561613033666239623637383137 Dec 12 17:22:07.672000 audit: BPF prog-id=170 op=UNLOAD Dec 12 17:22:07.691764 kernel: audit: type=1334 audit(1765560127.672:569): prog-id=170 op=UNLOAD Dec 12 17:22:07.692006 kernel: audit: type=1300 audit(1765560127.672:569): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:07.672000 
audit[3661]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:07.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306434376538353030633463323561613033666239623637383137 Dec 12 17:22:07.698616 kernel: audit: type=1327 audit(1765560127.672:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306434376538353030633463323561613033666239623637383137 Dec 12 17:22:07.698727 kernel: audit: type=1334 audit(1765560127.672:570): prog-id=169 op=UNLOAD Dec 12 17:22:07.672000 audit: BPF prog-id=169 op=UNLOAD Dec 12 17:22:07.672000 audit[3661]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:07.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306434376538353030633463323561613033666239623637383137 Dec 12 17:22:07.672000 audit: BPF prog-id=171 op=LOAD Dec 12 17:22:07.672000 audit[3661]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3410 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:07.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306434376538353030633463323561613033666239623637383137 Dec 12 17:22:07.707891 containerd[1697]: time="2025-12-12T17:22:07.707839434Z" level=info msg="StartContainer for \"400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7\" returns successfully" Dec 12 17:22:08.505891 kubelet[2889]: E1212 17:22:08.505822 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:08.968905 containerd[1697]: time="2025-12-12T17:22:08.968794069Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:22:08.971521 systemd[1]: cri-containerd-400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7.scope: Deactivated successfully. 
Dec 12 17:22:08.971902 systemd[1]: cri-containerd-400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7.scope: Consumed 493ms CPU time, 188.5M memory peak, 165.9M written to disk. Dec 12 17:22:08.973000 audit: BPF prog-id=171 op=UNLOAD Dec 12 17:22:08.976037 containerd[1697]: time="2025-12-12T17:22:08.975903648Z" level=info msg="received container exit event container_id:\"400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7\" id:\"400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7\" pid:3673 exited_at:{seconds:1765560128 nanos:975675807}" Dec 12 17:22:08.995498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-400d47e8500c4c25aa03fb9b6781773a5811d2bfcf9d0c9136b9cfc0887b50e7-rootfs.mount: Deactivated successfully. Dec 12 17:22:09.070318 kubelet[2889]: I1212 17:22:09.070264 2889 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:22:09.364651 kubelet[2889]: I1212 17:22:09.148429 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5ng\" (UniqueName: \"kubernetes.io/projected/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-kube-api-access-zp5ng\") pod \"whisker-695c74d9df-8zj42\" (UID: \"afd445e4-a9fa-4d38-9868-b4b4f9a0e006\") " pod="calico-system/whisker-695c74d9df-8zj42" Dec 12 17:22:09.364651 kubelet[2889]: I1212 17:22:09.148483 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24jgh\" (UniqueName: \"kubernetes.io/projected/f675d505-5319-47a1-bc86-409be66cd047-kube-api-access-24jgh\") pod \"calico-kube-controllers-696f66b658-n7nn2\" (UID: \"f675d505-5319-47a1-bc86-409be66cd047\") " pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" Dec 12 17:22:09.364651 kubelet[2889]: I1212 17:22:09.148505 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51905ae7-f059-4931-8cc7-e32bc90c24e4-calico-apiserver-certs\") pod \"calico-apiserver-878796b8-5d5jh\" (UID: \"51905ae7-f059-4931-8cc7-e32bc90c24e4\") " pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" Dec 12 17:22:09.364651 kubelet[2889]: I1212 17:22:09.148523 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2shsc\" (UniqueName: \"kubernetes.io/projected/0610226f-a574-4695-bdfb-8c6f86d2fa21-kube-api-access-2shsc\") pod \"coredns-668d6bf9bc-mf99p\" (UID: \"0610226f-a574-4695-bdfb-8c6f86d2fa21\") " pod="kube-system/coredns-668d6bf9bc-mf99p" Dec 12 17:22:09.364651 kubelet[2889]: I1212 17:22:09.148540 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bdf7dfd-6bce-4744-a930-376661816277-goldmane-ca-bundle\") pod \"goldmane-666569f655-nzjsj\" (UID: \"0bdf7dfd-6bce-4744-a930-376661816277\") " pod="calico-system/goldmane-666569f655-nzjsj" Dec 12 17:22:09.108057 systemd[1]: Created slice kubepods-besteffort-pod6e7780bc_34b6_4688_ae6a_fbd80527fba7.slice - libcontainer container kubepods-besteffort-pod6e7780bc_34b6_4688_ae6a_fbd80527fba7.slice. 
Dec 12 17:22:09.364901 kubelet[2889]: I1212 17:22:09.148555 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0bdf7dfd-6bce-4744-a930-376661816277-goldmane-key-pair\") pod \"goldmane-666569f655-nzjsj\" (UID: \"0bdf7dfd-6bce-4744-a930-376661816277\") " pod="calico-system/goldmane-666569f655-nzjsj" Dec 12 17:22:09.364901 kubelet[2889]: I1212 17:22:09.148571 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wr6d\" (UniqueName: \"kubernetes.io/projected/2500d734-eefc-4d8d-acd0-0691f355e183-kube-api-access-9wr6d\") pod \"coredns-668d6bf9bc-dk5m8\" (UID: \"2500d734-eefc-4d8d-acd0-0691f355e183\") " pod="kube-system/coredns-668d6bf9bc-dk5m8" Dec 12 17:22:09.364901 kubelet[2889]: I1212 17:22:09.148592 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-whisker-ca-bundle\") pod \"whisker-695c74d9df-8zj42\" (UID: \"afd445e4-a9fa-4d38-9868-b4b4f9a0e006\") " pod="calico-system/whisker-695c74d9df-8zj42" Dec 12 17:22:09.364901 kubelet[2889]: I1212 17:22:09.148651 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xpg8\" (UniqueName: \"kubernetes.io/projected/51905ae7-f059-4931-8cc7-e32bc90c24e4-kube-api-access-8xpg8\") pod \"calico-apiserver-878796b8-5d5jh\" (UID: \"51905ae7-f059-4931-8cc7-e32bc90c24e4\") " pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" Dec 12 17:22:09.364901 kubelet[2889]: I1212 17:22:09.148691 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-whisker-backend-key-pair\") pod \"whisker-695c74d9df-8zj42\" (UID: \"afd445e4-a9fa-4d38-9868-b4b4f9a0e006\") " pod="calico-system/whisker-695c74d9df-8zj42" Dec 12 17:22:09.115892 systemd[1]: Created slice kubepods-besteffort-podf675d505_5319_47a1_bc86_409be66cd047.slice - libcontainer container kubepods-besteffort-podf675d505_5319_47a1_bc86_409be66cd047.slice. 
Dec 12 17:22:09.365048 kubelet[2889]: I1212 17:22:09.148714 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0610226f-a574-4695-bdfb-8c6f86d2fa21-config-volume\") pod \"coredns-668d6bf9bc-mf99p\" (UID: \"0610226f-a574-4695-bdfb-8c6f86d2fa21\") " pod="kube-system/coredns-668d6bf9bc-mf99p" Dec 12 17:22:09.365048 kubelet[2889]: I1212 17:22:09.148732 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdf7dfd-6bce-4744-a930-376661816277-config\") pod \"goldmane-666569f655-nzjsj\" (UID: \"0bdf7dfd-6bce-4744-a930-376661816277\") " pod="calico-system/goldmane-666569f655-nzjsj" Dec 12 17:22:09.365048 kubelet[2889]: I1212 17:22:09.148759 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e7780bc-34b6-4688-ae6a-fbd80527fba7-calico-apiserver-certs\") pod \"calico-apiserver-878796b8-spr8v\" (UID: \"6e7780bc-34b6-4688-ae6a-fbd80527fba7\") " pod="calico-apiserver/calico-apiserver-878796b8-spr8v" Dec 12 17:22:09.365048 kubelet[2889]: I1212 17:22:09.148776 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqmb\" (UniqueName: \"kubernetes.io/projected/6e7780bc-34b6-4688-ae6a-fbd80527fba7-kube-api-access-qrqmb\") pod \"calico-apiserver-878796b8-spr8v\" (UID: \"6e7780bc-34b6-4688-ae6a-fbd80527fba7\") " pod="calico-apiserver/calico-apiserver-878796b8-spr8v" Dec 12 17:22:09.365048 kubelet[2889]: I1212 17:22:09.148792 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f675d505-5319-47a1-bc86-409be66cd047-tigera-ca-bundle\") pod \"calico-kube-controllers-696f66b658-n7nn2\" (UID: \"f675d505-5319-47a1-bc86-409be66cd047\") " pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" Dec 12 17:22:09.124073 systemd[1]: Created slice kubepods-besteffort-pod51905ae7_f059_4931_8cc7_e32bc90c24e4.slice - libcontainer container kubepods-besteffort-pod51905ae7_f059_4931_8cc7_e32bc90c24e4.slice. Dec 12 17:22:09.365190 kubelet[2889]: I1212 17:22:09.148809 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2500d734-eefc-4d8d-acd0-0691f355e183-config-volume\") pod \"coredns-668d6bf9bc-dk5m8\" (UID: \"2500d734-eefc-4d8d-acd0-0691f355e183\") " pod="kube-system/coredns-668d6bf9bc-dk5m8" Dec 12 17:22:09.365190 kubelet[2889]: I1212 17:22:09.148825 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hffrq\" (UniqueName: \"kubernetes.io/projected/0bdf7dfd-6bce-4744-a930-376661816277-kube-api-access-hffrq\") pod \"goldmane-666569f655-nzjsj\" (UID: \"0bdf7dfd-6bce-4744-a930-376661816277\") " pod="calico-system/goldmane-666569f655-nzjsj" Dec 12 17:22:09.132122 systemd[1]: Created slice kubepods-besteffort-podafd445e4_a9fa_4d38_9868_b4b4f9a0e006.slice - libcontainer container kubepods-besteffort-podafd445e4_a9fa_4d38_9868_b4b4f9a0e006.slice. Dec 12 17:22:09.136696 systemd[1]: Created slice kubepods-besteffort-pod0bdf7dfd_6bce_4744_a930_376661816277.slice - libcontainer container kubepods-besteffort-pod0bdf7dfd_6bce_4744_a930_376661816277.slice. 
Dec 12 17:22:09.148689 systemd[1]: Created slice kubepods-burstable-pod0610226f_a574_4695_bdfb_8c6f86d2fa21.slice - libcontainer container kubepods-burstable-pod0610226f_a574_4695_bdfb_8c6f86d2fa21.slice. Dec 12 17:22:09.156479 systemd[1]: Created slice kubepods-burstable-pod2500d734_eefc_4d8d_acd0_0691f355e183.slice - libcontainer container kubepods-burstable-pod2500d734_eefc_4d8d_acd0_0691f355e183.slice. Dec 12 17:22:10.113907 containerd[1697]: time="2025-12-12T17:22:10.113864484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696f66b658-n7nn2,Uid:f675d505-5319-47a1-bc86-409be66cd047,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:10.168537 containerd[1697]: time="2025-12-12T17:22:10.168475346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878796b8-spr8v,Uid:6e7780bc-34b6-4688-ae6a-fbd80527fba7,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:22:10.168537 containerd[1697]: time="2025-12-12T17:22:10.168470785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878796b8-5d5jh,Uid:51905ae7-f059-4931-8cc7-e32bc90c24e4,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:22:10.168681 containerd[1697]: time="2025-12-12T17:22:10.168470825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mf99p,Uid:0610226f-a574-4695-bdfb-8c6f86d2fa21,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:10.169995 containerd[1697]: time="2025-12-12T17:22:10.169923229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-695c74d9df-8zj42,Uid:afd445e4-a9fa-4d38-9868-b4b4f9a0e006,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:10.327254 containerd[1697]: time="2025-12-12T17:22:10.327196598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nzjsj,Uid:0bdf7dfd-6bce-4744-a930-376661816277,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:10.411281 containerd[1697]: time="2025-12-12T17:22:10.411158576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dk5m8,Uid:2500d734-eefc-4d8d-acd0-0691f355e183,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:10.513766 systemd[1]: Created slice kubepods-besteffort-pod3fd99a4f_5151_4d6d_a968_dc993caff3f6.slice - libcontainer container kubepods-besteffort-pod3fd99a4f_5151_4d6d_a968_dc993caff3f6.slice. 
Dec 12 17:22:10.516516 containerd[1697]: time="2025-12-12T17:22:10.516483130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79stw,Uid:3fd99a4f-5151-4d6d-a968-dc993caff3f6,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:10.580328 containerd[1697]: time="2025-12-12T17:22:10.580252935Z" level=error msg="Failed to destroy network for sandbox \"01e9bb4ca14dc795eebe843799c3e1fec244e4b2fb69bf22f1b2e4450819baf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.581635 containerd[1697]: time="2025-12-12T17:22:10.581600459Z" level=error msg="Failed to destroy network for sandbox \"b0b358c311876095ada21517814d4547034e6a0e8affe2c4205af47c91a8e9c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.583180 containerd[1697]: time="2025-12-12T17:22:10.583143023Z" level=error msg="Failed to destroy network for sandbox \"67e1d4d0dc4586da336729dfbb2b63d8f89e644d3cb097ce1ed1aaa17470fab7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.584839 containerd[1697]: time="2025-12-12T17:22:10.584799987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878796b8-spr8v,Uid:6e7780bc-34b6-4688-ae6a-fbd80527fba7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e9bb4ca14dc795eebe843799c3e1fec244e4b2fb69bf22f1b2e4450819baf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.585303 kubelet[2889]: E1212 17:22:10.585249 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e9bb4ca14dc795eebe843799c3e1fec244e4b2fb69bf22f1b2e4450819baf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.585740 kubelet[2889]: E1212 17:22:10.585336 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e9bb4ca14dc795eebe843799c3e1fec244e4b2fb69bf22f1b2e4450819baf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" Dec 12 17:22:10.585740 kubelet[2889]: E1212 17:22:10.585356 2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e9bb4ca14dc795eebe843799c3e1fec244e4b2fb69bf22f1b2e4450819baf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" Dec 12 17:22:10.585740 kubelet[2889]: E1212 17:22:10.585425 2889 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-878796b8-spr8v_calico-apiserver(6e7780bc-34b6-4688-ae6a-fbd80527fba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-878796b8-spr8v_calico-apiserver(6e7780bc-34b6-4688-ae6a-fbd80527fba7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01e9bb4ca14dc795eebe843799c3e1fec244e4b2fb69bf22f1b2e4450819baf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:22:10.585987 containerd[1697]: time="2025-12-12T17:22:10.585013828Z" level=error msg="Failed to destroy network for sandbox \"a5c2806e4916d4136f70e5aa216a67f0ca625debaca825f95e8d242a7650204c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.586357 containerd[1697]: time="2025-12-12T17:22:10.585851910Z" level=error msg="Failed to destroy network for sandbox \"1e8161ae4ca7af05880d9f3e0ad3ec577ff70ba3cc380e2742f40b9bf9eb35fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.587364 containerd[1697]: time="2025-12-12T17:22:10.587317754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-695c74d9df-8zj42,Uid:afd445e4-a9fa-4d38-9868-b4b4f9a0e006,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b358c311876095ada21517814d4547034e6a0e8affe2c4205af47c91a8e9c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.587975 kubelet[2889]: E1212 17:22:10.587939 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b358c311876095ada21517814d4547034e6a0e8affe2c4205af47c91a8e9c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.588040 kubelet[2889]: E1212 17:22:10.587995 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b358c311876095ada21517814d4547034e6a0e8affe2c4205af47c91a8e9c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-695c74d9df-8zj42" Dec 12 17:22:10.588040 kubelet[2889]: E1212 17:22:10.588015 2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b358c311876095ada21517814d4547034e6a0e8affe2c4205af47c91a8e9c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-695c74d9df-8zj42" Dec 12 17:22:10.588329 kubelet[2889]: 
E1212 17:22:10.588049 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-695c74d9df-8zj42_calico-system(afd445e4-a9fa-4d38-9868-b4b4f9a0e006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-695c74d9df-8zj42_calico-system(afd445e4-a9fa-4d38-9868-b4b4f9a0e006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0b358c311876095ada21517814d4547034e6a0e8affe2c4205af47c91a8e9c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-695c74d9df-8zj42" podUID="afd445e4-a9fa-4d38-9868-b4b4f9a0e006" Dec 12 17:22:10.590795 containerd[1697]: time="2025-12-12T17:22:10.590739162Z" level=error msg="Failed to destroy network for sandbox \"927518b195b8e4cb07b5e28a4696d8d8553189b1bebf25f7e1fd46c0f67bca29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.593197 containerd[1697]: time="2025-12-12T17:22:10.593147689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878796b8-5d5jh,Uid:51905ae7-f059-4931-8cc7-e32bc90c24e4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1d4d0dc4586da336729dfbb2b63d8f89e644d3cb097ce1ed1aaa17470fab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.593697 kubelet[2889]: E1212 17:22:10.593360 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1d4d0dc4586da336729dfbb2b63d8f89e644d3cb097ce1ed1aaa17470fab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.593697 kubelet[2889]: E1212 17:22:10.593587 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1d4d0dc4586da336729dfbb2b63d8f89e644d3cb097ce1ed1aaa17470fab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" Dec 12 17:22:10.594059 kubelet[2889]: E1212 17:22:10.593614 2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1d4d0dc4586da336729dfbb2b63d8f89e644d3cb097ce1ed1aaa17470fab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" Dec 12 17:22:10.594059 kubelet[2889]: E1212 17:22:10.594020 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-878796b8-5d5jh_calico-apiserver(51905ae7-f059-4931-8cc7-e32bc90c24e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-878796b8-5d5jh_calico-apiserver(51905ae7-f059-4931-8cc7-e32bc90c24e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67e1d4d0dc4586da336729dfbb2b63d8f89e644d3cb097ce1ed1aaa17470fab7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:22:10.594712 containerd[1697]: time="2025-12-12T17:22:10.594677053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dk5m8,Uid:2500d734-eefc-4d8d-acd0-0691f355e183,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e8161ae4ca7af05880d9f3e0ad3ec577ff70ba3cc380e2742f40b9bf9eb35fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.595180 containerd[1697]: time="2025-12-12T17:22:10.594704133Z" level=error msg="Failed to destroy network for sandbox \"ff7b7001bd4c78782d7b27b580df29b639a4f895d2882b01ffaba26815a3e8ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.595242 kubelet[2889]: E1212 17:22:10.595127 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e8161ae4ca7af05880d9f3e0ad3ec577ff70ba3cc380e2742f40b9bf9eb35fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.595503 kubelet[2889]: E1212 17:22:10.595296 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e8161ae4ca7af05880d9f3e0ad3ec577ff70ba3cc380e2742f40b9bf9eb35fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dk5m8" Dec 12 17:22:10.595503 kubelet[2889]: E1212 17:22:10.595321 2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e8161ae4ca7af05880d9f3e0ad3ec577ff70ba3cc380e2742f40b9bf9eb35fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dk5m8" Dec 12 17:22:10.595503 kubelet[2889]: E1212 17:22:10.595459 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dk5m8_kube-system(2500d734-eefc-4d8d-acd0-0691f355e183)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dk5m8_kube-system(2500d734-eefc-4d8d-acd0-0691f355e183)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e8161ae4ca7af05880d9f3e0ad3ec577ff70ba3cc380e2742f40b9bf9eb35fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dk5m8" podUID="2500d734-eefc-4d8d-acd0-0691f355e183" Dec 12 17:22:10.596622 containerd[1697]: time="2025-12-12T17:22:10.596580138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696f66b658-n7nn2,Uid:f675d505-5319-47a1-bc86-409be66cd047,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c2806e4916d4136f70e5aa216a67f0ca625debaca825f95e8d242a7650204c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.597644 kubelet[2889]: E1212 17:22:10.597606 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c2806e4916d4136f70e5aa216a67f0ca625debaca825f95e8d242a7650204c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.597644 kubelet[2889]: E1212 17:22:10.597654 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c2806e4916d4136f70e5aa216a67f0ca625debaca825f95e8d242a7650204c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" Dec 12 17:22:10.597644 kubelet[2889]: E1212 17:22:10.597671 2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c2806e4916d4136f70e5aa216a67f0ca625debaca825f95e8d242a7650204c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" Dec 12 17:22:10.598667 kubelet[2889]: E1212 17:22:10.597702 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-696f66b658-n7nn2_calico-system(f675d505-5319-47a1-bc86-409be66cd047)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-696f66b658-n7nn2_calico-system(f675d505-5319-47a1-bc86-409be66cd047)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5c2806e4916d4136f70e5aa216a67f0ca625debaca825f95e8d242a7650204c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:22:10.598667 kubelet[2889]: E1212 17:22:10.598646 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"927518b195b8e4cb07b5e28a4696d8d8553189b1bebf25f7e1fd46c0f67bca29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.598783 containerd[1697]: 
time="2025-12-12T17:22:10.598443342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mf99p,Uid:0610226f-a574-4695-bdfb-8c6f86d2fa21,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"927518b195b8e4cb07b5e28a4696d8d8553189b1bebf25f7e1fd46c0f67bca29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.598828 kubelet[2889]: E1212 17:22:10.598688 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"927518b195b8e4cb07b5e28a4696d8d8553189b1bebf25f7e1fd46c0f67bca29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mf99p" Dec 12 17:22:10.598828 kubelet[2889]: E1212 17:22:10.598730 2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"927518b195b8e4cb07b5e28a4696d8d8553189b1bebf25f7e1fd46c0f67bca29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mf99p" Dec 12 17:22:10.598828 kubelet[2889]: E1212 17:22:10.598768 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mf99p_kube-system(0610226f-a574-4695-bdfb-8c6f86d2fa21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mf99p_kube-system(0610226f-a574-4695-bdfb-8c6f86d2fa21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"927518b195b8e4cb07b5e28a4696d8d8553189b1bebf25f7e1fd46c0f67bca29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mf99p" podUID="0610226f-a574-4695-bdfb-8c6f86d2fa21" Dec 12 17:22:10.599890 containerd[1697]: time="2025-12-12T17:22:10.599851426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nzjsj,Uid:0bdf7dfd-6bce-4744-a930-376661816277,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff7b7001bd4c78782d7b27b580df29b639a4f895d2882b01ffaba26815a3e8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.600069 kubelet[2889]: E1212 17:22:10.600038 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff7b7001bd4c78782d7b27b580df29b639a4f895d2882b01ffaba26815a3e8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.600179 kubelet[2889]: E1212 17:22:10.600160 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ff7b7001bd4c78782d7b27b580df29b639a4f895d2882b01ffaba26815a3e8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nzjsj" Dec 12 17:22:10.600411 kubelet[2889]: E1212 17:22:10.600230 2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff7b7001bd4c78782d7b27b580df29b639a4f895d2882b01ffaba26815a3e8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nzjsj" Dec 12 17:22:10.600411 kubelet[2889]: E1212 17:22:10.600265 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nzjsj_calico-system(0bdf7dfd-6bce-4744-a930-376661816277)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nzjsj_calico-system(0bdf7dfd-6bce-4744-a930-376661816277)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff7b7001bd4c78782d7b27b580df29b639a4f895d2882b01ffaba26815a3e8ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:22:10.607139 containerd[1697]: time="2025-12-12T17:22:10.607028405Z" level=error msg="Failed to destroy network for sandbox \"9dc8af5affba18cf3c1017bad0889d63afe57fd95708b2bef77c811d6b553420\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.609820 containerd[1697]: time="2025-12-12T17:22:10.609318731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79stw,Uid:3fd99a4f-5151-4d6d-a968-dc993caff3f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc8af5affba18cf3c1017bad0889d63afe57fd95708b2bef77c811d6b553420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.610168 kubelet[2889]: E1212 17:22:10.610131 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc8af5affba18cf3c1017bad0889d63afe57fd95708b2bef77c811d6b553420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:10.610228 kubelet[2889]: E1212 17:22:10.610185 2889 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc8af5affba18cf3c1017bad0889d63afe57fd95708b2bef77c811d6b553420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79stw" Dec 12 17:22:10.610228 kubelet[2889]: E1212 17:22:10.610203 
2889 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc8af5affba18cf3c1017bad0889d63afe57fd95708b2bef77c811d6b553420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79stw" Dec 12 17:22:10.610285 kubelet[2889]: E1212 17:22:10.610256 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dc8af5affba18cf3c1017bad0889d63afe57fd95708b2bef77c811d6b553420\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:10.620670 containerd[1697]: time="2025-12-12T17:22:10.620558920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:22:11.113060 systemd[1]: run-netns-cni\x2dedceb1c4\x2dadd1\x2d6764\x2d516c\x2d9b43ef7af1e1.mount: Deactivated successfully. Dec 12 17:22:11.113153 systemd[1]: run-netns-cni\x2d6038c2e8\x2d47b2\x2dcd1a\x2d348e\x2d998dc2f2ec60.mount: Deactivated successfully. Dec 12 17:22:11.113204 systemd[1]: run-netns-cni\x2dfb556a91\x2d42cc\x2df26b\x2d730d\x2d60105924c55f.mount: Deactivated successfully. Dec 12 17:22:15.247496 kubelet[2889]: I1212 17:22:15.247330 2889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:22:15.272000 audit[4008]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:15.274949 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 12 17:22:15.275028 kernel: audit: type=1325 audit(1765560135.272:573): table=filter:119 family=2 entries=21 op=nft_register_rule pid=4008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:15.272000 audit[4008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd2de6d30 a2=0 a3=1 items=0 ppid=2997 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.281600 kernel: audit: type=1300 audit(1765560135.272:573): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd2de6d30 a2=0 a3=1 items=0 ppid=2997 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.281827 kernel: audit: type=1327 audit(1765560135.272:573): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:15.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:15.285000 audit[4008]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4008 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:15.285000 audit[4008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd2de6d30 a2=0 a3=1 items=0 ppid=2997 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.293238 kernel: audit: type=1325 audit(1765560135.285:574): table=nat:120 family=2 entries=19 op=nft_register_chain pid=4008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:15.293335 kernel: audit: type=1300 audit(1765560135.285:574): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd2de6d30 a2=0 a3=1 items=0 ppid=2997 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.293360 kernel: audit: type=1327 audit(1765560135.285:574): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:15.285000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:17.695896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286277905.mount: Deactivated successfully. Dec 12 17:22:17.716272 containerd[1697]: time="2025-12-12T17:22:17.716217192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:17.717574 containerd[1697]: time="2025-12-12T17:22:17.717507555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:22:17.718647 containerd[1697]: time="2025-12-12T17:22:17.718591998Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:17.720924 containerd[1697]: time="2025-12-12T17:22:17.720869204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:17.721582 containerd[1697]: time="2025-12-12T17:22:17.721544286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.100863046s" Dec 12 17:22:17.721633 containerd[1697]: time="2025-12-12T17:22:17.721584246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:22:17.732038 containerd[1697]: time="2025-12-12T17:22:17.731990513Z" level=info msg="CreateContainer within sandbox \"1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:22:17.745730 containerd[1697]: time="2025-12-12T17:22:17.745674508Z" level=info msg="Container e3b41444ab3d0895b81c533578475360e48b3097b16496f700362d17e3021ba2: CDI devices from CRI 
Config.CDIDevices: []" Dec 12 17:22:17.755801 containerd[1697]: time="2025-12-12T17:22:17.755746494Z" level=info msg="CreateContainer within sandbox \"1a8a9e326da7daf8606b04c2ffdf99cc18cc543f76d2356c86afcf3bb88d5bdf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e3b41444ab3d0895b81c533578475360e48b3097b16496f700362d17e3021ba2\"" Dec 12 17:22:17.756291 containerd[1697]: time="2025-12-12T17:22:17.756260776Z" level=info msg="StartContainer for \"e3b41444ab3d0895b81c533578475360e48b3097b16496f700362d17e3021ba2\"" Dec 12 17:22:17.759339 containerd[1697]: time="2025-12-12T17:22:17.759015543Z" level=info msg="connecting to shim e3b41444ab3d0895b81c533578475360e48b3097b16496f700362d17e3021ba2" address="unix:///run/containerd/s/2a6094ef8c350b2acd322bc5d33dfa7a3baac08828998d74224f06800c3a291c" protocol=ttrpc version=3 Dec 12 17:22:17.780664 systemd[1]: Started cri-containerd-e3b41444ab3d0895b81c533578475360e48b3097b16496f700362d17e3021ba2.scope - libcontainer container e3b41444ab3d0895b81c533578475360e48b3097b16496f700362d17e3021ba2. Dec 12 17:22:17.845000 audit: BPF prog-id=172 op=LOAD Dec 12 17:22:17.845000 audit[4015]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3410 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:17.851755 kernel: audit: type=1334 audit(1765560137.845:575): prog-id=172 op=LOAD Dec 12 17:22:17.851830 kernel: audit: type=1300 audit(1765560137.845:575): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3410 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:17.851851 kernel: audit: type=1327 audit(1765560137.845:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533623431343434616233643038393562383163353333353738343735 Dec 12 17:22:17.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533623431343434616233643038393562383163353333353738343735 Dec 12 17:22:17.855149 kernel: audit: type=1334 audit(1765560137.846:576): prog-id=173 op=LOAD Dec 12 17:22:17.846000 audit: BPF prog-id=173 op=LOAD Dec 12 17:22:17.846000 audit[4015]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3410 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:17.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533623431343434616233643038393562383163353333353738343735 Dec 12 17:22:17.846000 audit: BPF prog-id=173 op=UNLOAD Dec 12 17:22:17.846000 audit[4015]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:17.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533623431343434616233643038393562383163353333353738343735 Dec 12 17:22:17.846000 audit: BPF prog-id=172 op=UNLOAD Dec 12 17:22:17.846000 audit[4015]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3410 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:17.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533623431343434616233643038393562383163353333353738343735 Dec 12 17:22:17.846000 audit: BPF prog-id=174 op=LOAD Dec 12 17:22:17.846000 audit[4015]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3410 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:17.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533623431343434616233643038393562383163353333353738343735 Dec 12 17:22:17.877584 containerd[1697]: time="2025-12-12T17:22:17.877534251Z" level=info msg="StartContainer for \"e3b41444ab3d0895b81c533578475360e48b3097b16496f700362d17e3021ba2\" returns successfully" Dec 12 17:22:18.016462 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:22:18.016571 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 12 17:22:18.217768 kubelet[2889]: I1212 17:22:18.217724 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp5ng\" (UniqueName: \"kubernetes.io/projected/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-kube-api-access-zp5ng\") pod \"afd445e4-a9fa-4d38-9868-b4b4f9a0e006\" (UID: \"afd445e4-a9fa-4d38-9868-b4b4f9a0e006\") " Dec 12 17:22:18.217768 kubelet[2889]: I1212 17:22:18.217771 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-whisker-backend-key-pair\") pod \"afd445e4-a9fa-4d38-9868-b4b4f9a0e006\" (UID: \"afd445e4-a9fa-4d38-9868-b4b4f9a0e006\") " Dec 12 17:22:18.218286 kubelet[2889]: I1212 17:22:18.217800 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-whisker-ca-bundle\") pod \"afd445e4-a9fa-4d38-9868-b4b4f9a0e006\" (UID: \"afd445e4-a9fa-4d38-9868-b4b4f9a0e006\") " Dec 12 17:22:18.218286 kubelet[2889]: I1212 17:22:18.218120 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "afd445e4-a9fa-4d38-9868-b4b4f9a0e006" (UID: "afd445e4-a9fa-4d38-9868-b4b4f9a0e006"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:22:18.220888 kubelet[2889]: I1212 17:22:18.220838 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "afd445e4-a9fa-4d38-9868-b4b4f9a0e006" (UID: "afd445e4-a9fa-4d38-9868-b4b4f9a0e006"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:22:18.220977 kubelet[2889]: I1212 17:22:18.220841 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-kube-api-access-zp5ng" (OuterVolumeSpecName: "kube-api-access-zp5ng") pod "afd445e4-a9fa-4d38-9868-b4b4f9a0e006" (UID: "afd445e4-a9fa-4d38-9868-b4b4f9a0e006"). InnerVolumeSpecName "kube-api-access-zp5ng". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:22:18.318721 kubelet[2889]: I1212 17:22:18.318676 2889 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zp5ng\" (UniqueName: \"kubernetes.io/projected/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-kube-api-access-zp5ng\") on node \"ci-4515-1-0-8-acd31a5336\" DevicePath \"\"" Dec 12 17:22:18.318721 kubelet[2889]: I1212 17:22:18.318714 2889 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-whisker-backend-key-pair\") on node \"ci-4515-1-0-8-acd31a5336\" DevicePath \"\"" Dec 12 17:22:18.318721 kubelet[2889]: I1212 17:22:18.318725 2889 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afd445e4-a9fa-4d38-9868-b4b4f9a0e006-whisker-ca-bundle\") on node \"ci-4515-1-0-8-acd31a5336\" DevicePath \"\"" Dec 12 17:22:18.652353 systemd[1]: Removed slice kubepods-besteffort-podafd445e4_a9fa_4d38_9868_b4b4f9a0e006.slice - libcontainer container kubepods-besteffort-podafd445e4_a9fa_4d38_9868_b4b4f9a0e006.slice. Dec 12 17:22:18.677312 kubelet[2889]: I1212 17:22:18.677245 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n5wv7" podStartSLOduration=2.247595342 podStartE2EDuration="24.677227888s" podCreationTimestamp="2025-12-12 17:21:54 +0000 UTC" firstStartedPulling="2025-12-12 17:21:55.292697182 +0000 UTC m=+24.015745043" lastFinishedPulling="2025-12-12 17:22:17.722329768 +0000 UTC m=+46.445377589" observedRunningTime="2025-12-12 17:22:18.664192574 +0000 UTC m=+47.387240435" watchObservedRunningTime="2025-12-12 17:22:18.677227888 +0000 UTC m=+47.400275709" Dec 12 17:22:18.699512 systemd[1]: var-lib-kubelet-pods-afd445e4\x2da9fa\x2d4d38\x2d9868\x2db4b4f9a0e006-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:22:18.699633 systemd[1]: var-lib-kubelet-pods-afd445e4\x2da9fa\x2d4d38\x2d9868\x2db4b4f9a0e006-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzp5ng.mount: Deactivated successfully. Dec 12 17:22:18.725192 systemd[1]: Created slice kubepods-besteffort-pod785c11bf_7217_4d42_afaf_b9c091f491b5.slice - libcontainer container kubepods-besteffort-pod785c11bf_7217_4d42_afaf_b9c091f491b5.slice. 
Dec 12 17:22:18.822560 kubelet[2889]: I1212 17:22:18.822449 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/785c11bf-7217-4d42-afaf-b9c091f491b5-whisker-ca-bundle\") pod \"whisker-58c79b5c7c-dvc69\" (UID: \"785c11bf-7217-4d42-afaf-b9c091f491b5\") " pod="calico-system/whisker-58c79b5c7c-dvc69" Dec 12 17:22:18.822560 kubelet[2889]: I1212 17:22:18.822552 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/785c11bf-7217-4d42-afaf-b9c091f491b5-whisker-backend-key-pair\") pod \"whisker-58c79b5c7c-dvc69\" (UID: \"785c11bf-7217-4d42-afaf-b9c091f491b5\") " pod="calico-system/whisker-58c79b5c7c-dvc69" Dec 12 17:22:18.822560 kubelet[2889]: I1212 17:22:18.822575 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqfw\" (UniqueName: \"kubernetes.io/projected/785c11bf-7217-4d42-afaf-b9c091f491b5-kube-api-access-mvqfw\") pod \"whisker-58c79b5c7c-dvc69\" (UID: \"785c11bf-7217-4d42-afaf-b9c091f491b5\") " pod="calico-system/whisker-58c79b5c7c-dvc69" Dec 12 17:22:19.030271 containerd[1697]: time="2025-12-12T17:22:19.029801324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58c79b5c7c-dvc69,Uid:785c11bf-7217-4d42-afaf-b9c091f491b5,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:19.164581 systemd-networkd[1601]: cali6165b6153e8: Link UP Dec 12 17:22:19.165027 systemd-networkd[1601]: cali6165b6153e8: Gained carrier Dec 12 17:22:19.177929 containerd[1697]: 2025-12-12 17:22:19.052 [INFO][4106] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:22:19.177929 containerd[1697]: 2025-12-12 17:22:19.071 [INFO][4106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0 whisker-58c79b5c7c- calico-system 785c11bf-7217-4d42-afaf-b9c091f491b5 881 0 2025-12-12 17:22:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58c79b5c7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515-1-0-8-acd31a5336 whisker-58c79b5c7c-dvc69 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6165b6153e8 [] [] }} ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Namespace="calico-system" Pod="whisker-58c79b5c7c-dvc69" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-" Dec 12 17:22:19.177929 containerd[1697]: 2025-12-12 17:22:19.071 [INFO][4106] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Namespace="calico-system" Pod="whisker-58c79b5c7c-dvc69" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" Dec 12 17:22:19.177929 containerd[1697]: 2025-12-12 17:22:19.118 [INFO][4120] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" HandleID="k8s-pod-network.238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Workload="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" Dec 12 17:22:19.178165 containerd[1697]: 2025-12-12 17:22:19.118 [INFO][4120] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" HandleID="k8s-pod-network.238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Workload="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-8-acd31a5336", "pod":"whisker-58c79b5c7c-dvc69", "timestamp":"2025-12-12 17:22:19.118125433 +0000 UTC"}, Hostname:"ci-4515-1-0-8-acd31a5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:22:19.178165 containerd[1697]: 2025-12-12 17:22:19.118 [INFO][4120] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:22:19.178165 containerd[1697]: 2025-12-12 17:22:19.119 [INFO][4120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:22:19.178165 containerd[1697]: 2025-12-12 17:22:19.119 [INFO][4120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-8-acd31a5336' Dec 12 17:22:19.178165 containerd[1697]: 2025-12-12 17:22:19.129 [INFO][4120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:19.178165 containerd[1697]: 2025-12-12 17:22:19.134 [INFO][4120] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:19.178165 containerd[1697]: 2025-12-12 17:22:19.139 [INFO][4120] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:19.178165 containerd[1697]: 2025-12-12 17:22:19.141 [INFO][4120] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:19.178165 containerd[1697]: 2025-12-12 17:22:19.143 [INFO][4120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:19.178347 containerd[1697]: 2025-12-12 17:22:19.143 [INFO][4120] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:19.178347 containerd[1697]: 2025-12-12 17:22:19.145 [INFO][4120] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf Dec 12 17:22:19.178347 containerd[1697]: 2025-12-12 17:22:19.149 [INFO][4120] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:19.178347 containerd[1697]: 2025-12-12 17:22:19.154 [INFO][4120] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.193/26] block=192.168.25.192/26 handle="k8s-pod-network.238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:19.178347 containerd[1697]: 2025-12-12 17:22:19.154 [INFO][4120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.193/26] handle="k8s-pod-network.238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:19.178347 
containerd[1697]: 2025-12-12 17:22:19.154 [INFO][4120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:22:19.178347 containerd[1697]: 2025-12-12 17:22:19.154 [INFO][4120] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.193/26] IPv6=[] ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" HandleID="k8s-pod-network.238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Workload="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" Dec 12 17:22:19.178498 containerd[1697]: 2025-12-12 17:22:19.157 [INFO][4106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Namespace="calico-system" Pod="whisker-58c79b5c7c-dvc69" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0", GenerateName:"whisker-58c79b5c7c-", Namespace:"calico-system", SelfLink:"", UID:"785c11bf-7217-4d42-afaf-b9c091f491b5", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58c79b5c7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"", Pod:"whisker-58c79b5c7c-dvc69", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6165b6153e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:19.178498 containerd[1697]: 2025-12-12 17:22:19.157 [INFO][4106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.193/32] ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Namespace="calico-system" Pod="whisker-58c79b5c7c-dvc69" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" Dec 12 17:22:19.178568 containerd[1697]: 2025-12-12 17:22:19.157 [INFO][4106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6165b6153e8 ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Namespace="calico-system" Pod="whisker-58c79b5c7c-dvc69" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" Dec 12 17:22:19.178568 containerd[1697]: 2025-12-12 17:22:19.165 [INFO][4106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Namespace="calico-system" Pod="whisker-58c79b5c7c-dvc69" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" Dec 12 17:22:19.178609 containerd[1697]: 2025-12-12 17:22:19.166 [INFO][4106] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Namespace="calico-system" Pod="whisker-58c79b5c7c-dvc69" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0", GenerateName:"whisker-58c79b5c7c-", Namespace:"calico-system", SelfLink:"", UID:"785c11bf-7217-4d42-afaf-b9c091f491b5", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58c79b5c7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf", Pod:"whisker-58c79b5c7c-dvc69", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6165b6153e8", MAC:"ca:31:28:ba:b1:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:19.178656 containerd[1697]: 2025-12-12 17:22:19.175 [INFO][4106] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" Namespace="calico-system" Pod="whisker-58c79b5c7c-dvc69" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-whisker--58c79b5c7c--dvc69-eth0" Dec 12 17:22:19.200792 containerd[1697]: time="2025-12-12T17:22:19.200733968Z" level=info msg="connecting to shim 238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf" address="unix:///run/containerd/s/2d713d885bfc142f28d17527bbd990e4febaaad7b48c52084da83a2e050f5669" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:19.225613 systemd[1]: Started cri-containerd-238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf.scope - libcontainer container 238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf. 
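The IPAM exchange above claims 192.168.25.193 from the host-affine block 192.168.25.192/26 and then records it on the WorkloadEndpoint as a /32. A small sketch of that addressing relationship, using only the standard ipaddress module rather than anything from Calico:

import ipaddress

block    = ipaddress.ip_network("192.168.25.192/26")   # host-affine IPAM block from the log
assigned = ipaddress.ip_address("192.168.25.193")      # address auto-assigned to the whisker pod

assert assigned in block                               # the claimed IP stays inside the affine block
endpoint_net = ipaddress.ip_network(f"{assigned}/32")  # the IPNetworks value written to the endpoint
print(f"{assigned} drawn from {block}; endpoint gets host route {endpoint_net}")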
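The audit records that follow carry each command line as a hex-encoded PROCTITLE field with NUL-separated argv. A short helper for reading those fields back (a sketch, not part of any tooling referenced in the log):

def decode_proctitle(hex_string: str) -> str:
    """Turn an audit PROCTITLE value (hex of NUL-separated argv) into a readable command line."""
    raw = bytes.fromhex(hex_string)
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

# One of the values that appears repeatedly in the records below:
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# prints: bpftool map list --json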
Dec 12 17:22:19.234000 audit: BPF prog-id=175 op=LOAD Dec 12 17:22:19.234000 audit: BPF prog-id=176 op=LOAD Dec 12 17:22:19.234000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4144 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386232633237643662626138383035663732633865383438666235 Dec 12 17:22:19.235000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:22:19.235000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4144 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386232633237643662626138383035663732633865383438666235 Dec 12 17:22:19.235000 audit: BPF prog-id=177 op=LOAD Dec 12 17:22:19.235000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4144 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386232633237643662626138383035663732633865383438666235 Dec 12 17:22:19.235000 audit: BPF prog-id=178 op=LOAD Dec 12 17:22:19.235000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4144 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386232633237643662626138383035663732633865383438666235 Dec 12 17:22:19.235000 audit: BPF prog-id=178 op=UNLOAD Dec 12 17:22:19.235000 audit[4155]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4144 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386232633237643662626138383035663732633865383438666235 Dec 12 17:22:19.235000 audit: BPF prog-id=177 op=UNLOAD Dec 12 17:22:19.235000 audit[4155]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4144 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386232633237643662626138383035663732633865383438666235 Dec 12 17:22:19.235000 audit: BPF prog-id=179 op=LOAD Dec 12 17:22:19.235000 audit[4155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4144 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233386232633237643662626138383035663732633865383438666235 Dec 12 17:22:19.257096 containerd[1697]: time="2025-12-12T17:22:19.257055634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58c79b5c7c-dvc69,Uid:785c11bf-7217-4d42-afaf-b9c091f491b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"238b2c27d6bba8805f72c8e848fb56c210860cbfd4320dce24b26633cceb69cf\"" Dec 12 17:22:19.259453 containerd[1697]: time="2025-12-12T17:22:19.259350960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:22:19.510388 kubelet[2889]: I1212 17:22:19.510331 2889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd445e4-a9fa-4d38-9868-b4b4f9a0e006" path="/var/lib/kubelet/pods/afd445e4-a9fa-4d38-9868-b4b4f9a0e006/volumes" Dec 12 17:22:19.526000 audit: BPF prog-id=180 op=LOAD Dec 12 17:22:19.526000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9034a48 a2=98 a3=ffffc9034a38 items=0 ppid=4202 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.526000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:22:19.526000 audit: BPF prog-id=180 op=UNLOAD Dec 12 17:22:19.526000 audit[4292]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc9034a18 a3=0 items=0 ppid=4202 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.526000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:22:19.526000 audit: BPF prog-id=181 op=LOAD Dec 12 17:22:19.526000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc90348f8 a2=74 
a3=95 items=0 ppid=4202 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.526000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:22:19.526000 audit: BPF prog-id=181 op=UNLOAD Dec 12 17:22:19.526000 audit[4292]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4202 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.526000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:22:19.526000 audit: BPF prog-id=182 op=LOAD Dec 12 17:22:19.526000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9034928 a2=40 a3=ffffc9034958 items=0 ppid=4202 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.526000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:22:19.526000 audit: BPF prog-id=182 op=UNLOAD Dec 12 17:22:19.526000 audit[4292]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc9034958 items=0 ppid=4202 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.526000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:22:19.528000 audit: BPF prog-id=183 op=LOAD Dec 12 17:22:19.528000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff125ce48 a2=98 a3=fffff125ce38 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.528000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.528000 audit: BPF prog-id=183 op=UNLOAD Dec 12 17:22:19.528000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff125ce18 a3=0 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.528000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.529000 audit: BPF prog-id=184 op=LOAD Dec 12 17:22:19.529000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff125cad8 a2=74 a3=95 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.529000 audit: BPF prog-id=184 op=UNLOAD Dec 12 17:22:19.529000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.529000 audit: BPF prog-id=185 op=LOAD Dec 12 17:22:19.529000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff125cb38 a2=94 a3=2 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.529000 audit: BPF prog-id=185 op=UNLOAD Dec 12 17:22:19.529000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.616782 containerd[1697]: time="2025-12-12T17:22:19.616684608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:19.617801 containerd[1697]: time="2025-12-12T17:22:19.617725971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:22:19.617888 containerd[1697]: time="2025-12-12T17:22:19.617803931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:19.618108 kubelet[2889]: E1212 17:22:19.618050 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:22:19.618176 kubelet[2889]: E1212 17:22:19.618103 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:22:19.618348 kubelet[2889]: E1212 17:22:19.618314 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:036088a0863247c4915e9fb15ee70601,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:19.620225 containerd[1697]: time="2025-12-12T17:22:19.620169977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:22:19.633000 audit: BPF prog-id=186 op=LOAD Dec 12 17:22:19.633000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff125caf8 a2=40 a3=fffff125cb28 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.633000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.633000 audit: BPF prog-id=186 op=UNLOAD Dec 12 17:22:19.633000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff125cb28 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.633000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.644000 audit: BPF prog-id=187 op=LOAD Dec 12 17:22:19.644000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff125cb08 a2=94 a3=4 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.644000 audit: BPF prog-id=187 op=UNLOAD Dec 12 17:22:19.644000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c 
a2=70 a3=4 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.644000 audit: BPF prog-id=188 op=LOAD Dec 12 17:22:19.644000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff125c948 a2=94 a3=5 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.644000 audit: BPF prog-id=188 op=UNLOAD Dec 12 17:22:19.644000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.644000 audit: BPF prog-id=189 op=LOAD Dec 12 17:22:19.644000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff125cb78 a2=94 a3=6 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.644000 audit: BPF prog-id=189 op=UNLOAD Dec 12 17:22:19.644000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.644000 audit: BPF prog-id=190 op=LOAD Dec 12 17:22:19.644000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff125c348 a2=94 a3=83 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.645000 audit: BPF prog-id=191 op=LOAD Dec 12 17:22:19.645000 audit[4296]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff125c108 a2=94 a3=2 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.645000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:22:19.645000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.645000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:22:19.645000 audit[4296]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=41b2620 a3=41a5b00 items=0 ppid=4202 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:22:19.656000 audit: BPF prog-id=192 op=LOAD Dec 12 17:22:19.656000 audit[4336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc1531ee8 a2=98 a3=ffffc1531ed8 items=0 ppid=4202 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:22:19.656000 audit: BPF prog-id=192 op=UNLOAD Dec 12 17:22:19.656000 audit[4336]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc1531eb8 a3=0 items=0 ppid=4202 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:22:19.656000 audit: BPF prog-id=193 op=LOAD Dec 12 17:22:19.656000 audit[4336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc1531d98 a2=74 a3=95 items=0 ppid=4202 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:22:19.656000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:22:19.656000 audit[4336]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4202 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:22:19.656000 audit: BPF prog-id=194 op=LOAD Dec 12 17:22:19.656000 audit[4336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 
a1=ffffc1531dc8 a2=40 a3=ffffc1531df8 items=0 ppid=4202 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:22:19.656000 audit: BPF prog-id=194 op=UNLOAD Dec 12 17:22:19.656000 audit[4336]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc1531df8 items=0 ppid=4202 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:22:19.728632 systemd-networkd[1601]: vxlan.calico: Link UP Dec 12 17:22:19.728644 systemd-networkd[1601]: vxlan.calico: Gained carrier Dec 12 17:22:19.746000 audit: BPF prog-id=195 op=LOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb503688 a2=98 a3=ffffeb503678 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.746000 audit: BPF prog-id=195 op=UNLOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffeb503658 a3=0 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.746000 audit: BPF prog-id=196 op=LOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb503368 a2=74 a3=95 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.746000 audit: BPF prog-id=196 op=UNLOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4202 pid=4369 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.746000 audit: BPF prog-id=197 op=LOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb5033c8 a2=94 a3=2 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.746000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.746000 audit: BPF prog-id=198 op=LOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffeb503248 a2=40 a3=ffffeb503278 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.746000 audit: BPF prog-id=198 op=UNLOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffeb503278 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.746000 audit: BPF prog-id=199 op=LOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffeb503398 a2=94 a3=b7 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.746000 audit: BPF prog-id=199 op=UNLOAD Dec 12 17:22:19.746000 audit[4369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.746000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.747000 audit: BPF prog-id=200 op=LOAD Dec 12 17:22:19.747000 audit[4369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffeb502a48 a2=94 a3=2 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.747000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.747000 audit: BPF prog-id=200 op=UNLOAD Dec 12 17:22:19.747000 audit[4369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.747000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.747000 audit: BPF prog-id=201 op=LOAD Dec 12 17:22:19.747000 audit[4369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffeb502bd8 a2=94 a3=30 items=0 ppid=4202 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.747000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:22:19.752000 audit: BPF prog-id=202 op=LOAD Dec 12 17:22:19.752000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc5bde598 a2=98 a3=ffffc5bde588 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.752000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:22:19.752000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=3 a1=57156c a2=ffffc5bde568 a3=0 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.752000 audit: BPF prog-id=203 op=LOAD Dec 12 17:22:19.752000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5bde228 a2=74 a3=95 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.752000 audit: BPF prog-id=203 op=UNLOAD Dec 12 17:22:19.752000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.752000 audit: BPF prog-id=204 op=LOAD Dec 12 17:22:19.752000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5bde288 a2=94 a3=2 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.752000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:22:19.752000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.752000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.854000 audit: BPF prog-id=205 op=LOAD Dec 12 17:22:19.854000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5bde248 a2=40 a3=ffffc5bde278 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.854000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 
17:22:19.855000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:22:19.855000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc5bde278 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.855000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.867000 audit: BPF prog-id=206 op=LOAD Dec 12 17:22:19.867000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc5bde258 a2=94 a3=4 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.867000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.867000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:22:19.867000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.867000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.867000 audit: BPF prog-id=207 op=LOAD Dec 12 17:22:19.867000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc5bde098 a2=94 a3=5 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.867000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.867000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:22:19.867000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.867000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.867000 audit: BPF prog-id=208 op=LOAD Dec 12 17:22:19.867000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc5bde2c8 a2=94 a3=6 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.867000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.868000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:22:19.868000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.868000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.868000 audit: BPF prog-id=209 op=LOAD Dec 12 17:22:19.868000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc5bdda98 a2=94 a3=83 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.868000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.869000 audit: BPF prog-id=210 op=LOAD Dec 12 17:22:19.869000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc5bdd858 a2=94 a3=2 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.869000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:22:19.869000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.869000 audit: BPF prog-id=209 op=UNLOAD Dec 12 17:22:19.869000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=2c3c3620 a3=2c3b6b00 items=0 ppid=4202 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:22:19.877000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:22:19.877000 audit[4202]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000961380 a2=0 a3=0 items=0 ppid=4181 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.877000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 17:22:19.937000 audit[4399]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4399 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:19.937000 audit[4399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffca70abe0 a2=0 a3=ffffab435fa8 items=0 ppid=4202 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.937000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:19.940000 audit[4400]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4400 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:19.940000 audit[4400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd0771040 a2=0 a3=ffffb0e74fa8 items=0 ppid=4202 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.940000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:19.945460 containerd[1697]: time="2025-12-12T17:22:19.945363742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:19.946000 audit[4398]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4398 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:19.946000 audit[4398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff03519d0 a2=0 a3=ffffb793afa8 items=0 ppid=4202 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.946000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:19.949872 containerd[1697]: time="2025-12-12T17:22:19.949817634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:22:19.950057 containerd[1697]: time="2025-12-12T17:22:19.949915234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:19.950542 kubelet[2889]: E1212 17:22:19.950171 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:22:19.950542 kubelet[2889]: E1212 17:22:19.950361 2889 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:22:19.950647 kubelet[2889]: E1212 17:22:19.950492 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:19.952642 kubelet[2889]: E1212 17:22:19.952571 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:22:19.950000 audit[4402]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:19.950000 audit[4402]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffc7f9d8e0 a2=0 a3=ffff8811ffa8 items=0 ppid=4202 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:19.950000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:20.475629 systemd-networkd[1601]: cali6165b6153e8: Gained IPv6LL Dec 12 17:22:20.645419 kubelet[2889]: E1212 17:22:20.645201 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:22:20.663000 audit[4413]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:20.668115 kernel: kauditd_printk_skb: 231 callbacks suppressed Dec 12 17:22:20.668205 kernel: audit: type=1325 audit(1765560140.663:654): table=filter:125 family=2 entries=20 op=nft_register_rule pid=4413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:20.668245 kernel: audit: type=1300 audit(1765560140.663:654): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcb41ffa0 a2=0 a3=1 items=0 ppid=2997 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:20.663000 audit[4413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcb41ffa0 a2=0 a3=1 items=0 ppid=2997 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:20.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:20.674402 kernel: audit: type=1327 audit(1765560140.663:654): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:20.675000 audit[4413]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:20.675000 audit[4413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcb41ffa0 a2=0 a3=1 items=0 ppid=2997 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:20.682759 kernel: audit: type=1325 audit(1765560140.675:655): table=nat:126 family=2 entries=14 op=nft_register_rule pid=4413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:20.682867 kernel: audit: type=1300 audit(1765560140.675:655): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcb41ffa0 a2=0 a3=1 items=0 ppid=2997 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:20.682917 kernel: audit: type=1327 audit(1765560140.675:655): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:20.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:21.507579 containerd[1697]: time="2025-12-12T17:22:21.507511360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696f66b658-n7nn2,Uid:f675d505-5319-47a1-bc86-409be66cd047,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:21.562536 systemd-networkd[1601]: vxlan.calico: Gained IPv6LL Dec 12 17:22:21.641884 systemd-networkd[1601]: caliccd6a8009c6: Link UP Dec 12 17:22:21.642358 systemd-networkd[1601]: caliccd6a8009c6: Gained carrier Dec 12 17:22:21.662461 containerd[1697]: 2025-12-12 17:22:21.549 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0 calico-kube-controllers-696f66b658- calico-system f675d505-5319-47a1-bc86-409be66cd047 811 0 2025-12-12 17:21:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:696f66b658 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515-1-0-8-acd31a5336 calico-kube-controllers-696f66b658-n7nn2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliccd6a8009c6 [] [] }} ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Namespace="calico-system" Pod="calico-kube-controllers-696f66b658-n7nn2" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-" Dec 12 17:22:21.662461 containerd[1697]: 2025-12-12 17:22:21.549 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Namespace="calico-system" Pod="calico-kube-controllers-696f66b658-n7nn2" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" Dec 12 17:22:21.662461 containerd[1697]: 2025-12-12 17:22:21.579 [INFO][4433] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" HandleID="k8s-pod-network.4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Workload="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" Dec 12 17:22:21.662810 containerd[1697]: 2025-12-12 17:22:21.579 [INFO][4433] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" 
HandleID="k8s-pod-network.4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Workload="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001373f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-8-acd31a5336", "pod":"calico-kube-controllers-696f66b658-n7nn2", "timestamp":"2025-12-12 17:22:21.579562947 +0000 UTC"}, Hostname:"ci-4515-1-0-8-acd31a5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:22:21.662810 containerd[1697]: 2025-12-12 17:22:21.579 [INFO][4433] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:22:21.662810 containerd[1697]: 2025-12-12 17:22:21.579 [INFO][4433] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:22:21.662810 containerd[1697]: 2025-12-12 17:22:21.579 [INFO][4433] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-8-acd31a5336' Dec 12 17:22:21.662810 containerd[1697]: 2025-12-12 17:22:21.594 [INFO][4433] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:21.662810 containerd[1697]: 2025-12-12 17:22:21.599 [INFO][4433] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:21.662810 containerd[1697]: 2025-12-12 17:22:21.607 [INFO][4433] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:21.662810 containerd[1697]: 2025-12-12 17:22:21.610 [INFO][4433] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:21.662810 containerd[1697]: 2025-12-12 17:22:21.613 [INFO][4433] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:21.663036 containerd[1697]: 2025-12-12 17:22:21.613 [INFO][4433] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:21.663036 containerd[1697]: 2025-12-12 17:22:21.616 [INFO][4433] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1 Dec 12 17:22:21.663036 containerd[1697]: 2025-12-12 17:22:21.623 [INFO][4433] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:21.663036 containerd[1697]: 2025-12-12 17:22:21.632 [INFO][4433] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.194/26] block=192.168.25.192/26 handle="k8s-pod-network.4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:21.663036 containerd[1697]: 2025-12-12 17:22:21.632 [INFO][4433] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.194/26] handle="k8s-pod-network.4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:21.663036 containerd[1697]: 2025-12-12 17:22:21.632 [INFO][4433] ipam/ipam_plugin.go 398: Released host-wide 
IPAM lock. Dec 12 17:22:21.663036 containerd[1697]: 2025-12-12 17:22:21.632 [INFO][4433] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.194/26] IPv6=[] ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" HandleID="k8s-pod-network.4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Workload="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" Dec 12 17:22:21.663184 containerd[1697]: 2025-12-12 17:22:21.639 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Namespace="calico-system" Pod="calico-kube-controllers-696f66b658-n7nn2" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0", GenerateName:"calico-kube-controllers-696f66b658-", Namespace:"calico-system", SelfLink:"", UID:"f675d505-5319-47a1-bc86-409be66cd047", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696f66b658", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"", Pod:"calico-kube-controllers-696f66b658-n7nn2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccd6a8009c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:21.663250 containerd[1697]: 2025-12-12 17:22:21.639 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.194/32] ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Namespace="calico-system" Pod="calico-kube-controllers-696f66b658-n7nn2" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" Dec 12 17:22:21.663250 containerd[1697]: 2025-12-12 17:22:21.639 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccd6a8009c6 ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Namespace="calico-system" Pod="calico-kube-controllers-696f66b658-n7nn2" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" Dec 12 17:22:21.663250 containerd[1697]: 2025-12-12 17:22:21.643 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Namespace="calico-system" Pod="calico-kube-controllers-696f66b658-n7nn2" 
WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" Dec 12 17:22:21.663326 containerd[1697]: 2025-12-12 17:22:21.645 [INFO][4416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Namespace="calico-system" Pod="calico-kube-controllers-696f66b658-n7nn2" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0", GenerateName:"calico-kube-controllers-696f66b658-", Namespace:"calico-system", SelfLink:"", UID:"f675d505-5319-47a1-bc86-409be66cd047", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"696f66b658", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1", Pod:"calico-kube-controllers-696f66b658-n7nn2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccd6a8009c6", MAC:"6a:56:89:66:61:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:21.663381 containerd[1697]: 2025-12-12 17:22:21.660 [INFO][4416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" Namespace="calico-system" Pod="calico-kube-controllers-696f66b658-n7nn2" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--kube--controllers--696f66b658--n7nn2-eth0" Dec 12 17:22:21.670000 audit[4449]: NETFILTER_CFG table=filter:127 family=2 entries=36 op=nft_register_chain pid=4449 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:21.670000 audit[4449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffcbcc6b80 a2=0 a3=ffffb28a7fa8 items=0 ppid=4202 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:21.678547 kernel: audit: type=1325 audit(1765560141.670:656): table=filter:127 family=2 entries=36 op=nft_register_chain pid=4449 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:21.678606 kernel: audit: type=1300 audit(1765560141.670:656): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffcbcc6b80 a2=0 a3=ffffb28a7fa8 items=0 ppid=4202 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:21.670000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:21.681093 kernel: audit: type=1327 audit(1765560141.670:656): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:21.694472 containerd[1697]: time="2025-12-12T17:22:21.694425966Z" level=info msg="connecting to shim 4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1" address="unix:///run/containerd/s/fef7c1dc686faedb53974e08ba5d9c128657aeb5f5537d4481bf07ca08299532" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:21.716663 systemd[1]: Started cri-containerd-4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1.scope - libcontainer container 4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1. Dec 12 17:22:21.725000 audit: BPF prog-id=211 op=LOAD Dec 12 17:22:21.728449 kernel: audit: type=1334 audit(1765560141.725:657): prog-id=211 op=LOAD Dec 12 17:22:21.727000 audit: BPF prog-id=212 op=LOAD Dec 12 17:22:21.727000 audit[4470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4459 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:21.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313162393333646561656139626333376237366335623063373535 Dec 12 17:22:21.727000 audit: BPF prog-id=212 op=UNLOAD Dec 12 17:22:21.727000 audit[4470]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4459 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:21.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313162393333646561656139626333376237366335623063373535 Dec 12 17:22:21.727000 audit: BPF prog-id=213 op=LOAD Dec 12 17:22:21.727000 audit[4470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4459 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:21.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313162393333646561656139626333376237366335623063373535 Dec 12 17:22:21.727000 audit: BPF prog-id=214 op=LOAD Dec 12 17:22:21.727000 audit[4470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4459 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:21.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313162393333646561656139626333376237366335623063373535 Dec 12 17:22:21.727000 audit: BPF prog-id=214 op=UNLOAD Dec 12 17:22:21.727000 audit[4470]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4459 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:21.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313162393333646561656139626333376237366335623063373535 Dec 12 17:22:21.727000 audit: BPF prog-id=213 op=UNLOAD Dec 12 17:22:21.727000 audit[4470]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4459 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:21.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313162393333646561656139626333376237366335623063373535 Dec 12 17:22:21.727000 audit: BPF prog-id=215 op=LOAD Dec 12 17:22:21.727000 audit[4470]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4459 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:21.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313162393333646561656139626333376237366335623063373535 Dec 12 17:22:21.755706 containerd[1697]: time="2025-12-12T17:22:21.755587644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-696f66b658-n7nn2,Uid:f675d505-5319-47a1-bc86-409be66cd047,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a11b933deaea9bc37b76c5b0c7556990c12bdb6b5cb540bec2a7d1a800295b1\"" Dec 12 17:22:21.757098 containerd[1697]: time="2025-12-12T17:22:21.757047648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:22:22.107196 containerd[1697]: time="2025-12-12T17:22:22.107131918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:22.108736 containerd[1697]: time="2025-12-12T17:22:22.108689202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:22:22.108819 containerd[1697]: 
time="2025-12-12T17:22:22.108784922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:22.109028 kubelet[2889]: E1212 17:22:22.108991 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:22:22.109301 kubelet[2889]: E1212 17:22:22.109042 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:22:22.109301 kubelet[2889]: E1212 17:22:22.109170 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24jgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-696f66b658-n7nn2_calico-system(f675d505-5319-47a1-bc86-409be66cd047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:22.111256 kubelet[2889]: E1212 17:22:22.111212 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:22:22.506377 containerd[1697]: time="2025-12-12T17:22:22.506130554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878796b8-spr8v,Uid:6e7780bc-34b6-4688-ae6a-fbd80527fba7,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:22:22.506377 containerd[1697]: time="2025-12-12T17:22:22.506269394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79stw,Uid:3fd99a4f-5151-4d6d-a968-dc993caff3f6,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:22.617826 systemd-networkd[1601]: calic5ada3d61ff: Link UP Dec 12 17:22:22.618010 systemd-networkd[1601]: calic5ada3d61ff: Gained carrier Dec 12 17:22:22.633253 containerd[1697]: 2025-12-12 17:22:22.556 [INFO][4498] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0 calico-apiserver-878796b8- calico-apiserver 6e7780bc-34b6-4688-ae6a-fbd80527fba7 802 0 2025-12-12 17:21:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:878796b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-8-acd31a5336 calico-apiserver-878796b8-spr8v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic5ada3d61ff [] [] }} ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-spr8v" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-" Dec 12 17:22:22.633253 containerd[1697]: 2025-12-12 17:22:22.556 [INFO][4498] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-spr8v" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" Dec 12 17:22:22.633253 containerd[1697]: 2025-12-12 17:22:22.578 [INFO][4532] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" HandleID="k8s-pod-network.e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Workload="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" Dec 12 17:22:22.633853 containerd[1697]: 2025-12-12 17:22:22.578 [INFO][4532] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" 
HandleID="k8s-pod-network.e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Workload="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005aab80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-8-acd31a5336", "pod":"calico-apiserver-878796b8-spr8v", "timestamp":"2025-12-12 17:22:22.578544822 +0000 UTC"}, Hostname:"ci-4515-1-0-8-acd31a5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:22:22.633853 containerd[1697]: 2025-12-12 17:22:22.579 [INFO][4532] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:22:22.633853 containerd[1697]: 2025-12-12 17:22:22.579 [INFO][4532] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:22:22.633853 containerd[1697]: 2025-12-12 17:22:22.579 [INFO][4532] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-8-acd31a5336' Dec 12 17:22:22.633853 containerd[1697]: 2025-12-12 17:22:22.588 [INFO][4532] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.633853 containerd[1697]: 2025-12-12 17:22:22.592 [INFO][4532] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.633853 containerd[1697]: 2025-12-12 17:22:22.597 [INFO][4532] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.633853 containerd[1697]: 2025-12-12 17:22:22.598 [INFO][4532] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.633853 containerd[1697]: 2025-12-12 17:22:22.602 [INFO][4532] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.634044 containerd[1697]: 2025-12-12 17:22:22.602 [INFO][4532] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.634044 containerd[1697]: 2025-12-12 17:22:22.603 [INFO][4532] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b Dec 12 17:22:22.634044 containerd[1697]: 2025-12-12 17:22:22.607 [INFO][4532] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.634044 containerd[1697]: 2025-12-12 17:22:22.613 [INFO][4532] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.195/26] block=192.168.25.192/26 handle="k8s-pod-network.e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.634044 containerd[1697]: 2025-12-12 17:22:22.613 [INFO][4532] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.195/26] handle="k8s-pod-network.e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.634044 containerd[1697]: 2025-12-12 17:22:22.613 [INFO][4532] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:22:22.634044 containerd[1697]: 2025-12-12 17:22:22.613 [INFO][4532] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.195/26] IPv6=[] ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" HandleID="k8s-pod-network.e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Workload="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" Dec 12 17:22:22.634166 containerd[1697]: 2025-12-12 17:22:22.615 [INFO][4498] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-spr8v" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0", GenerateName:"calico-apiserver-878796b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e7780bc-34b6-4688-ae6a-fbd80527fba7", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"878796b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"", Pod:"calico-apiserver-878796b8-spr8v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5ada3d61ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:22.635167 containerd[1697]: 2025-12-12 17:22:22.615 [INFO][4498] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.195/32] ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-spr8v" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" Dec 12 17:22:22.635167 containerd[1697]: 2025-12-12 17:22:22.615 [INFO][4498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5ada3d61ff ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-spr8v" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" Dec 12 17:22:22.635167 containerd[1697]: 2025-12-12 17:22:22.617 [INFO][4498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-spr8v" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" Dec 12 17:22:22.635261 containerd[1697]: 2025-12-12 17:22:22.619 [INFO][4498] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-spr8v" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0", GenerateName:"calico-apiserver-878796b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e7780bc-34b6-4688-ae6a-fbd80527fba7", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"878796b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b", Pod:"calico-apiserver-878796b8-spr8v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5ada3d61ff", MAC:"be:2a:1a:b7:4a:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:22.635322 containerd[1697]: 2025-12-12 17:22:22.631 [INFO][4498] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-spr8v" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--spr8v-eth0" Dec 12 17:22:22.644000 audit[4552]: NETFILTER_CFG table=filter:128 family=2 entries=60 op=nft_register_chain pid=4552 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:22.644000 audit[4552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32248 a0=3 a1=fffffc195cb0 a2=0 a3=ffff98d4efa8 items=0 ppid=4202 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.644000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:22.650559 kubelet[2889]: E1212 17:22:22.650459 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:22:22.667937 containerd[1697]: time="2025-12-12T17:22:22.667892654Z" level=info msg="connecting to shim e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b" address="unix:///run/containerd/s/6990ead20922a6a2c3648725737dfdb7c7a0406bb3fd219e1073b3d4882e1aca" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:22.693678 systemd[1]: Started cri-containerd-e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b.scope - libcontainer container e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b. Dec 12 17:22:22.708000 audit: BPF prog-id=216 op=LOAD Dec 12 17:22:22.708000 audit: BPF prog-id=217 op=LOAD Dec 12 17:22:22.708000 audit[4573]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4562 pid=4573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536636235663034363133323532383138613537376436333533313663 Dec 12 17:22:22.708000 audit: BPF prog-id=217 op=UNLOAD Dec 12 17:22:22.708000 audit[4573]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4562 pid=4573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536636235663034363133323532383138613537376436333533313663 Dec 12 17:22:22.709000 audit: BPF prog-id=218 op=LOAD Dec 12 17:22:22.709000 audit[4573]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4562 pid=4573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536636235663034363133323532383138613537376436333533313663 Dec 12 17:22:22.709000 audit: BPF prog-id=219 op=LOAD Dec 12 17:22:22.709000 audit[4573]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4562 pid=4573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536636235663034363133323532383138613537376436333533313663 Dec 12 17:22:22.709000 audit: BPF prog-id=219 op=UNLOAD Dec 12 17:22:22.709000 audit[4573]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4562 pid=4573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536636235663034363133323532383138613537376436333533313663 Dec 12 17:22:22.709000 audit: BPF prog-id=218 op=UNLOAD Dec 12 17:22:22.709000 audit[4573]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4562 pid=4573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536636235663034363133323532383138613537376436333533313663 Dec 12 17:22:22.709000 audit: BPF prog-id=220 op=LOAD Dec 12 17:22:22.709000 audit[4573]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4562 pid=4573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536636235663034363133323532383138613537376436333533313663 Dec 12 17:22:22.726760 systemd-networkd[1601]: cali6aa9206b29a: Link UP Dec 12 17:22:22.727544 systemd-networkd[1601]: cali6aa9206b29a: Gained carrier Dec 12 17:22:22.743524 containerd[1697]: time="2025-12-12T17:22:22.743480011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878796b8-spr8v,Uid:6e7780bc-34b6-4688-ae6a-fbd80527fba7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e6cb5f04613252818a577d635316c3ed2e72e5cdbc334c5634d2cc77e8365a9b\"" Dec 12 17:22:22.745723 containerd[1697]: 2025-12-12 17:22:22.553 [INFO][4505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0 csi-node-driver- calico-system 3fd99a4f-5151-4d6d-a968-dc993caff3f6 691 0 2025-12-12 17:21:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515-1-0-8-acd31a5336 csi-node-driver-79stw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6aa9206b29a [] [] }} ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Namespace="calico-system" Pod="csi-node-driver-79stw" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-" Dec 12 17:22:22.745723 containerd[1697]: 2025-12-12 17:22:22.554 [INFO][4505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Namespace="calico-system" Pod="csi-node-driver-79stw" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" Dec 12 17:22:22.745723 containerd[1697]: 2025-12-12 17:22:22.579 [INFO][4528] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" HandleID="k8s-pod-network.803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Workload="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" Dec 12 17:22:22.746002 containerd[1697]: 2025-12-12 17:22:22.579 [INFO][4528] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" HandleID="k8s-pod-network.803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Workload="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-8-acd31a5336", "pod":"csi-node-driver-79stw", "timestamp":"2025-12-12 17:22:22.579719865 +0000 UTC"}, Hostname:"ci-4515-1-0-8-acd31a5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:22:22.746002 containerd[1697]: 2025-12-12 17:22:22.579 [INFO][4528] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:22:22.746002 containerd[1697]: 2025-12-12 17:22:22.613 [INFO][4528] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:22:22.746002 containerd[1697]: 2025-12-12 17:22:22.613 [INFO][4528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-8-acd31a5336' Dec 12 17:22:22.746002 containerd[1697]: 2025-12-12 17:22:22.689 [INFO][4528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.746002 containerd[1697]: 2025-12-12 17:22:22.695 [INFO][4528] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.746002 containerd[1697]: 2025-12-12 17:22:22.700 [INFO][4528] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.746002 containerd[1697]: 2025-12-12 17:22:22.703 [INFO][4528] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.746002 containerd[1697]: 2025-12-12 17:22:22.706 [INFO][4528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.746184 containerd[1697]: 2025-12-12 17:22:22.707 [INFO][4528] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.746184 containerd[1697]: 2025-12-12 17:22:22.709 [INFO][4528] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6 Dec 12 17:22:22.746184 containerd[1697]: 2025-12-12 17:22:22.714 [INFO][4528] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 
handle="k8s-pod-network.803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.746184 containerd[1697]: 2025-12-12 17:22:22.720 [INFO][4528] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.196/26] block=192.168.25.192/26 handle="k8s-pod-network.803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.746184 containerd[1697]: 2025-12-12 17:22:22.720 [INFO][4528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.196/26] handle="k8s-pod-network.803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:22.746184 containerd[1697]: 2025-12-12 17:22:22.721 [INFO][4528] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:22:22.746184 containerd[1697]: 2025-12-12 17:22:22.721 [INFO][4528] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.196/26] IPv6=[] ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" HandleID="k8s-pod-network.803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Workload="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" Dec 12 17:22:22.746302 containerd[1697]: 2025-12-12 17:22:22.723 [INFO][4505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Namespace="calico-system" Pod="csi-node-driver-79stw" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3fd99a4f-5151-4d6d-a968-dc993caff3f6", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"", Pod:"csi-node-driver-79stw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6aa9206b29a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:22.746349 containerd[1697]: 2025-12-12 17:22:22.723 [INFO][4505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.196/32] ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Namespace="calico-system" Pod="csi-node-driver-79stw" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" Dec 12 17:22:22.746349 containerd[1697]: 2025-12-12 17:22:22.723 [INFO][4505] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to cali6aa9206b29a ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Namespace="calico-system" Pod="csi-node-driver-79stw" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" Dec 12 17:22:22.746349 containerd[1697]: 2025-12-12 17:22:22.727 [INFO][4505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Namespace="calico-system" Pod="csi-node-driver-79stw" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" Dec 12 17:22:22.746637 containerd[1697]: 2025-12-12 17:22:22.728 [INFO][4505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Namespace="calico-system" Pod="csi-node-driver-79stw" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3fd99a4f-5151-4d6d-a968-dc993caff3f6", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6", Pod:"csi-node-driver-79stw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6aa9206b29a", MAC:"fa:28:93:87:e8:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:22.746729 containerd[1697]: 2025-12-12 17:22:22.740 [INFO][4505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" Namespace="calico-system" Pod="csi-node-driver-79stw" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-csi--node--driver--79stw-eth0" Dec 12 17:22:22.749542 containerd[1697]: time="2025-12-12T17:22:22.749449626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:22:22.757000 audit[4605]: NETFILTER_CFG table=filter:129 family=2 entries=46 op=nft_register_chain pid=4605 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:22.757000 audit[4605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23600 a0=3 a1=ffffc7f9a6f0 a2=0 a3=ffffa7e04fa8 items=0 ppid=4202 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.757000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:22.776486 containerd[1697]: time="2025-12-12T17:22:22.776364536Z" level=info msg="connecting to shim 803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6" address="unix:///run/containerd/s/545158f0fd5b81b8c6cc2c8fff9468bf809f6d901884ae9bcdd8985731940e06" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:22.803848 systemd[1]: Started cri-containerd-803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6.scope - libcontainer container 803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6. Dec 12 17:22:22.811000 audit: BPF prog-id=221 op=LOAD Dec 12 17:22:22.811000 audit: BPF prog-id=222 op=LOAD Dec 12 17:22:22.811000 audit[4625]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4614 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336132363331643065326439643562636330323862363334346464 Dec 12 17:22:22.811000 audit: BPF prog-id=222 op=UNLOAD Dec 12 17:22:22.811000 audit[4625]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4614 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336132363331643065326439643562636330323862363334346464 Dec 12 17:22:22.811000 audit: BPF prog-id=223 op=LOAD Dec 12 17:22:22.811000 audit[4625]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4614 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336132363331643065326439643562636330323862363334346464 Dec 12 17:22:22.811000 audit: BPF prog-id=224 op=LOAD Dec 12 17:22:22.811000 audit[4625]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4614 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.811000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336132363331643065326439643562636330323862363334346464 Dec 12 17:22:22.811000 audit: BPF prog-id=224 op=UNLOAD Dec 12 17:22:22.811000 audit[4625]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4614 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336132363331643065326439643562636330323862363334346464 Dec 12 17:22:22.811000 audit: BPF prog-id=223 op=UNLOAD Dec 12 17:22:22.811000 audit[4625]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4614 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336132363331643065326439643562636330323862363334346464 Dec 12 17:22:22.811000 audit: BPF prog-id=225 op=LOAD Dec 12 17:22:22.811000 audit[4625]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4614 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:22.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336132363331643065326439643562636330323862363334346464 Dec 12 17:22:22.826578 containerd[1697]: time="2025-12-12T17:22:22.826477706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79stw,Uid:3fd99a4f-5151-4d6d-a968-dc993caff3f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"803a2631d0e2d9d5bcc028b6344dd466627bb1606644891b37276f4656d001d6\"" Dec 12 17:22:23.086179 containerd[1697]: time="2025-12-12T17:22:23.085945020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:23.089642 containerd[1697]: time="2025-12-12T17:22:23.089566270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:22:23.089754 containerd[1697]: time="2025-12-12T17:22:23.089607350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:23.090180 kubelet[2889]: E1212 17:22:23.089975 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:22:23.090180 kubelet[2889]: E1212 17:22:23.090029 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:22:23.090419 kubelet[2889]: E1212 17:22:23.090341 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrqmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-878796b8-spr8v_calico-apiserver(6e7780bc-34b6-4688-ae6a-fbd80527fba7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:23.090616 containerd[1697]: time="2025-12-12T17:22:23.090591792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:22:23.091904 kubelet[2889]: E1212 17:22:23.091869 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" 
podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:22:23.419982 containerd[1697]: time="2025-12-12T17:22:23.419921528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:23.421180 containerd[1697]: time="2025-12-12T17:22:23.421145571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:22:23.421294 containerd[1697]: time="2025-12-12T17:22:23.421225691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:23.421498 kubelet[2889]: E1212 17:22:23.421454 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:22:23.421999 kubelet[2889]: E1212 17:22:23.421795 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:22:23.421999 kubelet[2889]: E1212 17:22:23.421940 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:23.424042 containerd[1697]: time="2025-12-12T17:22:23.423829298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:22:23.506506 containerd[1697]: time="2025-12-12T17:22:23.506389032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nzjsj,Uid:0bdf7dfd-6bce-4744-a930-376661816277,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:23.506733 containerd[1697]: time="2025-12-12T17:22:23.506389592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mf99p,Uid:0610226f-a574-4695-bdfb-8c6f86d2fa21,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:23.506733 containerd[1697]: time="2025-12-12T17:22:23.506389632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878796b8-5d5jh,Uid:51905ae7-f059-4931-8cc7-e32bc90c24e4,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:22:23.547802 systemd-networkd[1601]: caliccd6a8009c6: Gained IPv6LL Dec 12 17:22:23.639266 systemd-networkd[1601]: cali86e31d0e1dc: Link UP Dec 12 17:22:23.641204 systemd-networkd[1601]: cali86e31d0e1dc: Gained carrier Dec 12 17:22:23.655124 kubelet[2889]: E1212 17:22:23.655078 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:22:23.656554 containerd[1697]: 2025-12-12 17:22:23.568 [INFO][4651] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0 goldmane-666569f655- calico-system 0bdf7dfd-6bce-4744-a930-376661816277 810 0 2025-12-12 17:21:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515-1-0-8-acd31a5336 goldmane-666569f655-nzjsj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali86e31d0e1dc [] [] }} ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Namespace="calico-system" Pod="goldmane-666569f655-nzjsj" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-" Dec 12 17:22:23.656554 containerd[1697]: 2025-12-12 17:22:23.568 [INFO][4651] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Namespace="calico-system" Pod="goldmane-666569f655-nzjsj" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" Dec 12 17:22:23.656554 containerd[1697]: 2025-12-12 17:22:23.595 [INFO][4699] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" HandleID="k8s-pod-network.c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Workload="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" Dec 12 17:22:23.656995 containerd[1697]: 2025-12-12 
17:22:23.595 [INFO][4699] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" HandleID="k8s-pod-network.c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Workload="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-8-acd31a5336", "pod":"goldmane-666569f655-nzjsj", "timestamp":"2025-12-12 17:22:23.595658264 +0000 UTC"}, Hostname:"ci-4515-1-0-8-acd31a5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:22:23.656995 containerd[1697]: 2025-12-12 17:22:23.595 [INFO][4699] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:22:23.656995 containerd[1697]: 2025-12-12 17:22:23.596 [INFO][4699] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:22:23.656995 containerd[1697]: 2025-12-12 17:22:23.596 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-8-acd31a5336' Dec 12 17:22:23.656995 containerd[1697]: 2025-12-12 17:22:23.605 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.656995 containerd[1697]: 2025-12-12 17:22:23.609 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.656995 containerd[1697]: 2025-12-12 17:22:23.614 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.656995 containerd[1697]: 2025-12-12 17:22:23.616 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.656995 containerd[1697]: 2025-12-12 17:22:23.618 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.657359 containerd[1697]: 2025-12-12 17:22:23.618 [INFO][4699] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.657359 containerd[1697]: 2025-12-12 17:22:23.620 [INFO][4699] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730 Dec 12 17:22:23.657359 containerd[1697]: 2025-12-12 17:22:23.625 [INFO][4699] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.657359 containerd[1697]: 2025-12-12 17:22:23.634 [INFO][4699] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.197/26] block=192.168.25.192/26 handle="k8s-pod-network.c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.657359 containerd[1697]: 2025-12-12 17:22:23.634 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.197/26] handle="k8s-pod-network.c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" host="ci-4515-1-0-8-acd31a5336" Dec 12 
17:22:23.657359 containerd[1697]: 2025-12-12 17:22:23.634 [INFO][4699] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:22:23.657359 containerd[1697]: 2025-12-12 17:22:23.634 [INFO][4699] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.197/26] IPv6=[] ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" HandleID="k8s-pod-network.c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Workload="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" Dec 12 17:22:23.657916 containerd[1697]: 2025-12-12 17:22:23.635 [INFO][4651] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Namespace="calico-system" Pod="goldmane-666569f655-nzjsj" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0bdf7dfd-6bce-4744-a930-376661816277", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"", Pod:"goldmane-666569f655-nzjsj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali86e31d0e1dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:23.658845 containerd[1697]: 2025-12-12 17:22:23.636 [INFO][4651] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.197/32] ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Namespace="calico-system" Pod="goldmane-666569f655-nzjsj" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" Dec 12 17:22:23.658845 containerd[1697]: 2025-12-12 17:22:23.636 [INFO][4651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86e31d0e1dc ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Namespace="calico-system" Pod="goldmane-666569f655-nzjsj" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" Dec 12 17:22:23.658845 containerd[1697]: 2025-12-12 17:22:23.638 [INFO][4651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Namespace="calico-system" Pod="goldmane-666569f655-nzjsj" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" Dec 12 17:22:23.658932 kubelet[2889]: E1212 17:22:23.658258 2889 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:22:23.658990 containerd[1697]: 2025-12-12 17:22:23.639 [INFO][4651] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Namespace="calico-system" Pod="goldmane-666569f655-nzjsj" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0bdf7dfd-6bce-4744-a930-376661816277", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730", Pod:"goldmane-666569f655-nzjsj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali86e31d0e1dc", MAC:"42:32:15:61:41:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:23.659041 containerd[1697]: 2025-12-12 17:22:23.653 [INFO][4651] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" Namespace="calico-system" Pod="goldmane-666569f655-nzjsj" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-goldmane--666569f655--nzjsj-eth0" Dec 12 17:22:23.683000 audit[4733]: NETFILTER_CFG table=filter:130 family=2 entries=48 op=nft_register_chain pid=4733 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:23.683000 audit[4733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26336 a0=3 a1=fffff56f1a80 a2=0 a3=ffff88364fa8 items=0 ppid=4202 pid=4733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.683000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:23.692000 audit[4734]: NETFILTER_CFG 
table=filter:131 family=2 entries=20 op=nft_register_rule pid=4734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:23.692000 audit[4734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd5ef5e10 a2=0 a3=1 items=0 ppid=2997 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.692000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:23.699295 containerd[1697]: time="2025-12-12T17:22:23.699228373Z" level=info msg="connecting to shim c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730" address="unix:///run/containerd/s/81fd5277db4f8a3c47d746a410fa002ed1ef8687eaa4db09fa1bcec8495d8ee2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:23.699000 audit[4734]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=4734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:23.699000 audit[4734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd5ef5e10 a2=0 a3=1 items=0 ppid=2997 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:23.730690 systemd[1]: Started cri-containerd-c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730.scope - libcontainer container c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730. 
Dec 12 17:22:23.745000 audit: BPF prog-id=226 op=LOAD Dec 12 17:22:23.746000 audit: BPF prog-id=227 op=LOAD Dec 12 17:22:23.746000 audit[4755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4743 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335363263396237636162316565643837343838386537373966386432 Dec 12 17:22:23.746000 audit: BPF prog-id=227 op=UNLOAD Dec 12 17:22:23.746000 audit[4755]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4743 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335363263396237636162316565643837343838386537373966386432 Dec 12 17:22:23.746000 audit: BPF prog-id=228 op=LOAD Dec 12 17:22:23.746000 audit[4755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4743 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335363263396237636162316565643837343838386537373966386432 Dec 12 17:22:23.746000 audit: BPF prog-id=229 op=LOAD Dec 12 17:22:23.746000 audit[4755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4743 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335363263396237636162316565643837343838386537373966386432 Dec 12 17:22:23.746000 audit: BPF prog-id=229 op=UNLOAD Dec 12 17:22:23.746000 audit[4755]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4743 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335363263396237636162316565643837343838386537373966386432 Dec 12 17:22:23.747000 audit: BPF prog-id=228 op=UNLOAD Dec 12 17:22:23.747000 audit[4755]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4743 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335363263396237636162316565643837343838386537373966386432 Dec 12 17:22:23.747000 audit: BPF prog-id=230 op=LOAD Dec 12 17:22:23.747000 audit[4755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4743 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335363263396237636162316565643837343838386537373966386432 Dec 12 17:22:23.752717 systemd-networkd[1601]: cali8a7fbbc00e2: Link UP Dec 12 17:22:23.752912 systemd-networkd[1601]: cali8a7fbbc00e2: Gained carrier Dec 12 17:22:23.765981 containerd[1697]: time="2025-12-12T17:22:23.765630666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:23.769999 containerd[1697]: time="2025-12-12T17:22:23.769948597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:22:23.770130 containerd[1697]: time="2025-12-12T17:22:23.770001757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:23.770210 kubelet[2889]: E1212 17:22:23.770175 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:22:23.770266 kubelet[2889]: E1212 17:22:23.770228 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:22:23.770375 kubelet[2889]: E1212 17:22:23.770339 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:23.771609 kubelet[2889]: E1212 17:22:23.771547 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:23.775531 containerd[1697]: 2025-12-12 17:22:23.555 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0 coredns-668d6bf9bc- kube-system 0610226f-a574-4695-bdfb-8c6f86d2fa21 812 0 2025-12-12 17:21:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-8-acd31a5336 coredns-668d6bf9bc-mf99p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8a7fbbc00e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mf99p" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-" Dec 12 17:22:23.775531 containerd[1697]: 2025-12-12 17:22:23.555 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mf99p" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" Dec 12 17:22:23.775531 containerd[1697]: 2025-12-12 17:22:23.600 [INFO][4697] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" HandleID="k8s-pod-network.3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Workload="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" Dec 12 17:22:23.775860 containerd[1697]: 2025-12-12 17:22:23.601 [INFO][4697] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" HandleID="k8s-pod-network.3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Workload="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b6f30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-8-acd31a5336", "pod":"coredns-668d6bf9bc-mf99p", "timestamp":"2025-12-12 17:22:23.600767038 +0000 UTC"}, Hostname:"ci-4515-1-0-8-acd31a5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:22:23.775860 containerd[1697]: 2025-12-12 17:22:23.601 [INFO][4697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:22:23.775860 containerd[1697]: 2025-12-12 17:22:23.634 [INFO][4697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:22:23.775860 containerd[1697]: 2025-12-12 17:22:23.634 [INFO][4697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-8-acd31a5336' Dec 12 17:22:23.775860 containerd[1697]: 2025-12-12 17:22:23.706 [INFO][4697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.775860 containerd[1697]: 2025-12-12 17:22:23.712 [INFO][4697] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.775860 containerd[1697]: 2025-12-12 17:22:23.718 [INFO][4697] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.775860 containerd[1697]: 2025-12-12 17:22:23.721 [INFO][4697] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.775860 containerd[1697]: 2025-12-12 17:22:23.724 [INFO][4697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.776481 containerd[1697]: 2025-12-12 17:22:23.724 [INFO][4697] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.776481 containerd[1697]: 2025-12-12 17:22:23.727 [INFO][4697] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4 Dec 12 17:22:23.776481 containerd[1697]: 2025-12-12 17:22:23.734 [INFO][4697] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.776481 containerd[1697]: 2025-12-12 17:22:23.744 [INFO][4697] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.198/26] block=192.168.25.192/26 handle="k8s-pod-network.3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.776481 containerd[1697]: 2025-12-12 17:22:23.744 [INFO][4697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.198/26] handle="k8s-pod-network.3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.776481 containerd[1697]: 2025-12-12 17:22:23.744 [INFO][4697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:22:23.776481 containerd[1697]: 2025-12-12 17:22:23.744 [INFO][4697] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.198/26] IPv6=[] ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" HandleID="k8s-pod-network.3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Workload="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" Dec 12 17:22:23.777102 containerd[1697]: 2025-12-12 17:22:23.747 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mf99p" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0610226f-a574-4695-bdfb-8c6f86d2fa21", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"", Pod:"coredns-668d6bf9bc-mf99p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a7fbbc00e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:23.777102 containerd[1697]: 2025-12-12 17:22:23.747 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.198/32] ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mf99p" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" Dec 12 17:22:23.777102 containerd[1697]: 2025-12-12 17:22:23.747 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a7fbbc00e2 ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mf99p" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" Dec 12 17:22:23.777102 containerd[1697]: 2025-12-12 17:22:23.753 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-mf99p" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" Dec 12 17:22:23.777102 containerd[1697]: 2025-12-12 17:22:23.753 [INFO][4663] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mf99p" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0610226f-a574-4695-bdfb-8c6f86d2fa21", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4", Pod:"coredns-668d6bf9bc-mf99p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a7fbbc00e2", MAC:"be:52:45:f7:dc:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:23.777102 containerd[1697]: 2025-12-12 17:22:23.770 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mf99p" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--mf99p-eth0" Dec 12 17:22:23.790076 containerd[1697]: time="2025-12-12T17:22:23.790032209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nzjsj,Uid:0bdf7dfd-6bce-4744-a930-376661816277,Namespace:calico-system,Attempt:0,} returns sandbox id \"c562c9b7cab1eed874888e779f8d20c99f82189dbbda4a4f76eb08a7e1ab3730\"" Dec 12 17:22:23.790000 audit[4793]: NETFILTER_CFG table=filter:133 family=2 entries=56 op=nft_register_chain pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:23.790000 audit[4793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27748 a0=3 a1=ffffdbf29b80 a2=0 a3=ffffbe158fa8 items=0 ppid=4202 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.790000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:23.793841 containerd[1697]: time="2025-12-12T17:22:23.793612378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:22:23.811615 containerd[1697]: time="2025-12-12T17:22:23.811478025Z" level=info msg="connecting to shim 3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4" address="unix:///run/containerd/s/6ffe71e9e47cc86abad61f558ec22d236ce854118d8fcc91f8b9f3410d0571b2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:23.841836 systemd[1]: Started cri-containerd-3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4.scope - libcontainer container 3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4. Dec 12 17:22:23.854385 systemd-networkd[1601]: calid946d73cc33: Link UP Dec 12 17:22:23.855696 systemd-networkd[1601]: calid946d73cc33: Gained carrier Dec 12 17:22:23.861000 audit: BPF prog-id=231 op=LOAD Dec 12 17:22:23.861000 audit: BPF prog-id=232 op=LOAD Dec 12 17:22:23.861000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4802 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613364656238343135306634396164636261323864323363663434 Dec 12 17:22:23.861000 audit: BPF prog-id=232 op=UNLOAD Dec 12 17:22:23.861000 audit[4813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4802 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613364656238343135306634396164636261323864323363663434 Dec 12 17:22:23.862000 audit: BPF prog-id=233 op=LOAD Dec 12 17:22:23.862000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4802 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613364656238343135306634396164636261323864323363663434 Dec 12 17:22:23.862000 audit: BPF prog-id=234 op=LOAD Dec 12 17:22:23.862000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4802 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613364656238343135306634396164636261323864323363663434 Dec 12 17:22:23.862000 audit: BPF prog-id=234 op=UNLOAD Dec 12 17:22:23.862000 audit[4813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4802 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613364656238343135306634396164636261323864323363663434 Dec 12 17:22:23.862000 audit: BPF prog-id=233 op=UNLOAD Dec 12 17:22:23.862000 audit[4813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4802 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613364656238343135306634396164636261323864323363663434 Dec 12 17:22:23.862000 audit: BPF prog-id=235 op=LOAD Dec 12 17:22:23.862000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4802 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613364656238343135306634396164636261323864323363663434 Dec 12 17:22:23.866981 systemd-networkd[1601]: calic5ada3d61ff: Gained IPv6LL Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.576 [INFO][4664] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0 calico-apiserver-878796b8- calico-apiserver 51905ae7-f059-4931-8cc7-e32bc90c24e4 808 0 2025-12-12 17:21:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:878796b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-8-acd31a5336 calico-apiserver-878796b8-5d5jh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid946d73cc33 [] [] }} ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-5d5jh" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-" Dec 12 17:22:23.872172 containerd[1697]: 
2025-12-12 17:22:23.576 [INFO][4664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-5d5jh" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.601 [INFO][4711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" HandleID="k8s-pod-network.176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Workload="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.601 [INFO][4711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" HandleID="k8s-pod-network.176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Workload="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004261e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-8-acd31a5336", "pod":"calico-apiserver-878796b8-5d5jh", "timestamp":"2025-12-12 17:22:23.601428679 +0000 UTC"}, Hostname:"ci-4515-1-0-8-acd31a5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.601 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.744 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.744 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-8-acd31a5336' Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.807 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.816 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.827 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.829 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.833 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.833 [INFO][4711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.837 [INFO][4711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.841 [INFO][4711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.849 [INFO][4711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.199/26] block=192.168.25.192/26 handle="k8s-pod-network.176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.849 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.199/26] handle="k8s-pod-network.176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.849 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
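The ipam/ipam.go entries above trace Calico's block-affinity allocation: take the host-wide IPAM lock, look up the host's affine block (192.168.25.192/26 here), load it, claim the next free ordinal, write the block back, and release the lock. The Go sketch below is a simplified in-memory illustration of that claim step only; the bitmap bookkeeping and local mutex are assumptions made for the example, not Calico's actual datastore logic.

package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy stand-in for a Calico IPAM block: a /26 with affinity to one
// host plus a record of which ordinals are already claimed. Calico's real block
// lives in the datastore and is updated transactionally, not behind a local mutex.
type block struct {
	mu    sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	cidr  netip.Prefix
	inUse map[int]bool
}

// claimNext mirrors "Attempting to assign 1 addresses from block": find the
// first free ordinal, mark it used, and return the corresponding address.
func (b *block) claimNext() (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()

	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	addr := b.cidr.Addr()
	for i := 0; i < size; i++ {
		if !b.inUse[i] {
			b.inUse[i] = true
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.25.192/26"), inUse: map[int]bool{}}
	for i := 0; i < 7; i++ { // .192-.198 already handed out to earlier endpoints
		b.inUse[i] = true
	}
	ip, err := b.claimNext()
	if err != nil {
		panic(err)
	}
	fmt.Println("claimed", ip)
}

With ordinals 0-6 pre-marked, the sketch prints 192.168.25.199, matching the address claimed for the calico-apiserver pod in the entries around this point.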
Dec 12 17:22:23.872172 containerd[1697]: 2025-12-12 17:22:23.849 [INFO][4711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.199/26] IPv6=[] ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" HandleID="k8s-pod-network.176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Workload="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" Dec 12 17:22:23.872708 containerd[1697]: 2025-12-12 17:22:23.851 [INFO][4664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-5d5jh" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0", GenerateName:"calico-apiserver-878796b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"51905ae7-f059-4931-8cc7-e32bc90c24e4", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"878796b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"", Pod:"calico-apiserver-878796b8-5d5jh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid946d73cc33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:23.872708 containerd[1697]: 2025-12-12 17:22:23.851 [INFO][4664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.199/32] ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-5d5jh" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" Dec 12 17:22:23.872708 containerd[1697]: 2025-12-12 17:22:23.851 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid946d73cc33 ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-5d5jh" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" Dec 12 17:22:23.872708 containerd[1697]: 2025-12-12 17:22:23.853 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-5d5jh" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" Dec 12 17:22:23.872708 containerd[1697]: 2025-12-12 17:22:23.857 [INFO][4664] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-5d5jh" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0", GenerateName:"calico-apiserver-878796b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"51905ae7-f059-4931-8cc7-e32bc90c24e4", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"878796b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a", Pod:"calico-apiserver-878796b8-5d5jh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid946d73cc33", MAC:"16:ef:6c:4b:30:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:23.872708 containerd[1697]: 2025-12-12 17:22:23.869 [INFO][4664] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" Namespace="calico-apiserver" Pod="calico-apiserver-878796b8-5d5jh" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-calico--apiserver--878796b8--5d5jh-eth0" Dec 12 17:22:23.890000 audit[4847]: NETFILTER_CFG table=filter:134 family=2 entries=45 op=nft_register_chain pid=4847 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:23.890000 audit[4847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24216 a0=3 a1=ffffdbb3be20 a2=0 a3=ffffa5489fa8 items=0 ppid=4202 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.890000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:23.894818 containerd[1697]: time="2025-12-12T17:22:23.894783001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mf99p,Uid:0610226f-a574-4695-bdfb-8c6f86d2fa21,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4\"" Dec 12 17:22:23.899135 containerd[1697]: time="2025-12-12T17:22:23.898564891Z" level=info msg="CreateContainer within sandbox 
\"3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:22:23.907686 containerd[1697]: time="2025-12-12T17:22:23.907627035Z" level=info msg="connecting to shim 176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a" address="unix:///run/containerd/s/b35352a4d13565a345334b35f1b6bb6b3651074b69ba3bd94032b424f93aa9f1" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:23.912208 containerd[1697]: time="2025-12-12T17:22:23.912154446Z" level=info msg="Container 7cd46daae857d3b3a2deeeb09bf09991ac097a59d540f8b898c193e06948f68a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:23.923876 containerd[1697]: time="2025-12-12T17:22:23.923826197Z" level=info msg="CreateContainer within sandbox \"3ea3deb84150f49adcba28d23cf4447a58239ea0e29ae841397124fd06e568c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cd46daae857d3b3a2deeeb09bf09991ac097a59d540f8b898c193e06948f68a\"" Dec 12 17:22:23.927520 containerd[1697]: time="2025-12-12T17:22:23.927477966Z" level=info msg="StartContainer for \"7cd46daae857d3b3a2deeeb09bf09991ac097a59d540f8b898c193e06948f68a\"" Dec 12 17:22:23.929198 containerd[1697]: time="2025-12-12T17:22:23.929159731Z" level=info msg="connecting to shim 7cd46daae857d3b3a2deeeb09bf09991ac097a59d540f8b898c193e06948f68a" address="unix:///run/containerd/s/6ffe71e9e47cc86abad61f558ec22d236ce854118d8fcc91f8b9f3410d0571b2" protocol=ttrpc version=3 Dec 12 17:22:23.930578 systemd-networkd[1601]: cali6aa9206b29a: Gained IPv6LL Dec 12 17:22:23.933658 systemd[1]: Started cri-containerd-176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a.scope - libcontainer container 176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a. Dec 12 17:22:23.945790 systemd[1]: Started cri-containerd-7cd46daae857d3b3a2deeeb09bf09991ac097a59d540f8b898c193e06948f68a.scope - libcontainer container 7cd46daae857d3b3a2deeeb09bf09991ac097a59d540f8b898c193e06948f68a. 
Dec 12 17:22:23.947000 audit: BPF prog-id=236 op=LOAD Dec 12 17:22:23.948000 audit: BPF prog-id=237 op=LOAD Dec 12 17:22:23.948000 audit[4868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137366264396237633431613636323431616633396131383535356532 Dec 12 17:22:23.948000 audit: BPF prog-id=237 op=UNLOAD Dec 12 17:22:23.948000 audit[4868]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137366264396237633431613636323431616633396131383535356532 Dec 12 17:22:23.948000 audit: BPF prog-id=238 op=LOAD Dec 12 17:22:23.948000 audit[4868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137366264396237633431613636323431616633396131383535356532 Dec 12 17:22:23.948000 audit: BPF prog-id=239 op=LOAD Dec 12 17:22:23.948000 audit[4868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137366264396237633431613636323431616633396131383535356532 Dec 12 17:22:23.948000 audit: BPF prog-id=239 op=UNLOAD Dec 12 17:22:23.948000 audit[4868]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137366264396237633431613636323431616633396131383535356532 Dec 12 17:22:23.948000 audit: BPF prog-id=238 op=UNLOAD Dec 12 17:22:23.948000 audit[4868]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137366264396237633431613636323431616633396131383535356532 Dec 12 17:22:23.948000 audit: BPF prog-id=240 op=LOAD Dec 12 17:22:23.948000 audit[4868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137366264396237633431613636323431616633396131383535356532 Dec 12 17:22:23.959000 audit: BPF prog-id=241 op=LOAD Dec 12 17:22:23.960000 audit: BPF prog-id=242 op=LOAD Dec 12 17:22:23.960000 audit[4880]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4802 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643436646161653835376433623361326465656562303962663039 Dec 12 17:22:23.960000 audit: BPF prog-id=242 op=UNLOAD Dec 12 17:22:23.960000 audit[4880]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4802 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643436646161653835376433623361326465656562303962663039 Dec 12 17:22:23.960000 audit: BPF prog-id=243 op=LOAD Dec 12 17:22:23.960000 audit[4880]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4802 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643436646161653835376433623361326465656562303962663039 Dec 12 17:22:23.960000 audit: BPF prog-id=244 op=LOAD Dec 12 17:22:23.960000 audit[4880]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 
ppid=4802 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643436646161653835376433623361326465656562303962663039 Dec 12 17:22:23.962000 audit: BPF prog-id=244 op=UNLOAD Dec 12 17:22:23.962000 audit[4880]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4802 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643436646161653835376433623361326465656562303962663039 Dec 12 17:22:23.962000 audit: BPF prog-id=243 op=UNLOAD Dec 12 17:22:23.962000 audit[4880]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4802 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643436646161653835376433623361326465656562303962663039 Dec 12 17:22:23.962000 audit: BPF prog-id=245 op=LOAD Dec 12 17:22:23.962000 audit[4880]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4802 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:23.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643436646161653835376433623361326465656562303962663039 Dec 12 17:22:23.979698 containerd[1697]: time="2025-12-12T17:22:23.979660982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-878796b8-5d5jh,Uid:51905ae7-f059-4931-8cc7-e32bc90c24e4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"176bd9b7c41a66241af39a18555e24e38cc42aca27b04c9c63505c0c3b82916a\"" Dec 12 17:22:23.986791 containerd[1697]: time="2025-12-12T17:22:23.986742120Z" level=info msg="StartContainer for \"7cd46daae857d3b3a2deeeb09bf09991ac097a59d540f8b898c193e06948f68a\" returns successfully" Dec 12 17:22:24.148352 containerd[1697]: time="2025-12-12T17:22:24.148310980Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:24.149946 containerd[1697]: time="2025-12-12T17:22:24.149887344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:22:24.150014 containerd[1697]: time="2025-12-12T17:22:24.149981624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:24.150186 kubelet[2889]: E1212 17:22:24.150145 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:22:24.150263 kubelet[2889]: E1212 17:22:24.150196 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:22:24.150564 kubelet[2889]: E1212 17:22:24.150460 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hffrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nzjsj_calico-system(0bdf7dfd-6bce-4744-a930-376661816277): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:24.151022 containerd[1697]: time="2025-12-12T17:22:24.150833226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:22:24.152221 kubelet[2889]: E1212 17:22:24.152187 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:22:24.483776 containerd[1697]: time="2025-12-12T17:22:24.483718371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:24.485336 containerd[1697]: time="2025-12-12T17:22:24.485288735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:22:24.485432 containerd[1697]: time="2025-12-12T17:22:24.485357135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:24.485588 kubelet[2889]: E1212 17:22:24.485523 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:22:24.485588 kubelet[2889]: E1212 17:22:24.485580 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:22:24.485881 kubelet[2889]: E1212 17:22:24.485728 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xpg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-878796b8-5d5jh_calico-apiserver(51905ae7-f059-4931-8cc7-e32bc90c24e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:24.487118 kubelet[2889]: E1212 17:22:24.487065 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:22:24.506285 containerd[1697]: time="2025-12-12T17:22:24.506217630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dk5m8,Uid:2500d734-eefc-4d8d-acd0-0691f355e183,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:24.608567 systemd-networkd[1601]: calif63ea2102b4: Link UP Dec 12 17:22:24.609358 systemd-networkd[1601]: calif63ea2102b4: Gained carrier Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.541 [INFO][4927] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0 coredns-668d6bf9bc- kube-system 2500d734-eefc-4d8d-acd0-0691f355e183 806 0 2025-12-12 17:21:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-8-acd31a5336 coredns-668d6bf9bc-dk5m8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif63ea2102b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dk5m8" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.541 [INFO][4927] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dk5m8" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.567 [INFO][4943] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" HandleID="k8s-pod-network.c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Workload="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.568 [INFO][4943] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" HandleID="k8s-pod-network.c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Workload="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000354120), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-8-acd31a5336", "pod":"coredns-668d6bf9bc-dk5m8", "timestamp":"2025-12-12 17:22:24.56795271 +0000 UTC"}, Hostname:"ci-4515-1-0-8-acd31a5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.568 [INFO][4943] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.568 [INFO][4943] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.568 [INFO][4943] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-8-acd31a5336' Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.577 [INFO][4943] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.582 [INFO][4943] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.586 [INFO][4943] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.589 [INFO][4943] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.591 [INFO][4943] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.591 [INFO][4943] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.593 [INFO][4943] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0 Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.597 [INFO][4943] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.603 [INFO][4943] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.200/26] block=192.168.25.192/26 handle="k8s-pod-network.c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.604 [INFO][4943] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.200/26] handle="k8s-pod-network.c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" host="ci-4515-1-0-8-acd31a5336" Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.604 [INFO][4943] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:22:24.624714 containerd[1697]: 2025-12-12 17:22:24.604 [INFO][4943] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.200/26] IPv6=[] ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" HandleID="k8s-pod-network.c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Workload="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" Dec 12 17:22:24.625451 containerd[1697]: 2025-12-12 17:22:24.606 [INFO][4927] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dk5m8" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2500d734-eefc-4d8d-acd0-0691f355e183", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"", Pod:"coredns-668d6bf9bc-dk5m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif63ea2102b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:24.625451 containerd[1697]: 2025-12-12 17:22:24.606 [INFO][4927] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.200/32] ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dk5m8" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" Dec 12 17:22:24.625451 containerd[1697]: 2025-12-12 17:22:24.606 [INFO][4927] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif63ea2102b4 ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dk5m8" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" Dec 12 17:22:24.625451 containerd[1697]: 2025-12-12 17:22:24.609 [INFO][4927] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-dk5m8" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" Dec 12 17:22:24.625451 containerd[1697]: 2025-12-12 17:22:24.609 [INFO][4927] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dk5m8" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2500d734-eefc-4d8d-acd0-0691f355e183", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-8-acd31a5336", ContainerID:"c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0", Pod:"coredns-668d6bf9bc-dk5m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif63ea2102b4", MAC:"2a:c1:78:1d:7c:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:24.625451 containerd[1697]: 2025-12-12 17:22:24.621 [INFO][4927] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dk5m8" WorkloadEndpoint="ci--4515--1--0--8--acd31a5336-k8s-coredns--668d6bf9bc--dk5m8-eth0" Dec 12 17:22:24.634000 audit[4960]: NETFILTER_CFG table=filter:135 family=2 entries=44 op=nft_register_chain pid=4960 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:22:24.634000 audit[4960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21484 a0=3 a1=fffffea627c0 a2=0 a3=ffffafc5efa8 items=0 ppid=4202 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.634000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:22:24.648205 containerd[1697]: time="2025-12-12T17:22:24.648011038Z" 
level=info msg="connecting to shim c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0" address="unix:///run/containerd/s/1a8844dc4e7412693d949ca7bee6369097f144d1998d227d0ad82bc45f4f9a81" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:24.661962 kubelet[2889]: E1212 17:22:24.661871 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:22:24.671208 kubelet[2889]: E1212 17:22:24.671145 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:22:24.671375 kubelet[2889]: E1212 17:22:24.671218 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:22:24.672156 kubelet[2889]: E1212 17:22:24.672121 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:24.686649 systemd[1]: Started cri-containerd-c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0.scope - libcontainer container c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0. 
Dec 12 17:22:24.688000 audit[4993]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:24.688000 audit[4993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc2346990 a2=0 a3=1 items=0 ppid=2997 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:24.693000 audit[4993]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:24.693000 audit[4993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc2346990 a2=0 a3=1 items=0 ppid=2997 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.693000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:24.707000 audit: BPF prog-id=246 op=LOAD Dec 12 17:22:24.709000 audit: BPF prog-id=247 op=LOAD Dec 12 17:22:24.709000 audit[4980]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4970 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383966636434643938353961653937623763643032633935643931 Dec 12 17:22:24.709000 audit: BPF prog-id=247 op=UNLOAD Dec 12 17:22:24.709000 audit[4980]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4970 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383966636434643938353961653937623763643032633935643931 Dec 12 17:22:24.709000 audit: BPF prog-id=248 op=LOAD Dec 12 17:22:24.709000 audit[4980]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4970 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383966636434643938353961653937623763643032633935643931 Dec 12 17:22:24.709000 audit: BPF prog-id=249 op=LOAD Dec 12 17:22:24.709000 audit[4980]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4970 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383966636434643938353961653937623763643032633935643931 Dec 12 17:22:24.709000 audit: BPF prog-id=249 op=UNLOAD Dec 12 17:22:24.709000 audit[4980]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4970 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383966636434643938353961653937623763643032633935643931 Dec 12 17:22:24.709000 audit: BPF prog-id=248 op=UNLOAD Dec 12 17:22:24.709000 audit[4980]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4970 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383966636434643938353961653937623763643032633935643931 Dec 12 17:22:24.709000 audit: BPF prog-id=250 op=LOAD Dec 12 17:22:24.709000 audit[4980]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4970 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383966636434643938353961653937623763643032633935643931 Dec 12 17:22:24.727000 audit[5002]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:24.727000 audit[5002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd36e5340 a2=0 a3=1 items=0 ppid=2997 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:24.735467 kubelet[2889]: I1212 17:22:24.735183 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mf99p" podStartSLOduration=47.735168264 podStartE2EDuration="47.735168264s" 
podCreationTimestamp="2025-12-12 17:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:22:24.709830198 +0000 UTC m=+53.432878059" watchObservedRunningTime="2025-12-12 17:22:24.735168264 +0000 UTC m=+53.458216125" Dec 12 17:22:24.735000 audit[5002]: NETFILTER_CFG table=nat:139 family=2 entries=35 op=nft_register_chain pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:24.735000 audit[5002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd36e5340 a2=0 a3=1 items=0 ppid=2997 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:24.746508 containerd[1697]: time="2025-12-12T17:22:24.746447334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dk5m8,Uid:2500d734-eefc-4d8d-acd0-0691f355e183,Namespace:kube-system,Attempt:0,} returns sandbox id \"c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0\"" Dec 12 17:22:24.751285 containerd[1697]: time="2025-12-12T17:22:24.751220466Z" level=info msg="CreateContainer within sandbox \"c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:22:24.767259 containerd[1697]: time="2025-12-12T17:22:24.766549626Z" level=info msg="Container 2a2fac02c46e4d4168f331f0a7136a61a8c5d8b48ae9b169bf739fbadb6e764d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:24.767098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555817005.mount: Deactivated successfully. Dec 12 17:22:24.774245 containerd[1697]: time="2025-12-12T17:22:24.774202206Z" level=info msg="CreateContainer within sandbox \"c689fcd4d9859ae97b7cd02c95d91e3070f541ae217b3b525b3fa8ded18040a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a2fac02c46e4d4168f331f0a7136a61a8c5d8b48ae9b169bf739fbadb6e764d\"" Dec 12 17:22:24.775219 containerd[1697]: time="2025-12-12T17:22:24.775190448Z" level=info msg="StartContainer for \"2a2fac02c46e4d4168f331f0a7136a61a8c5d8b48ae9b169bf739fbadb6e764d\"" Dec 12 17:22:24.776602 containerd[1697]: time="2025-12-12T17:22:24.776555612Z" level=info msg="connecting to shim 2a2fac02c46e4d4168f331f0a7136a61a8c5d8b48ae9b169bf739fbadb6e764d" address="unix:///run/containerd/s/1a8844dc4e7412693d949ca7bee6369097f144d1998d227d0ad82bc45f4f9a81" protocol=ttrpc version=3 Dec 12 17:22:24.797689 systemd[1]: Started cri-containerd-2a2fac02c46e4d4168f331f0a7136a61a8c5d8b48ae9b169bf739fbadb6e764d.scope - libcontainer container 2a2fac02c46e4d4168f331f0a7136a61a8c5d8b48ae9b169bf739fbadb6e764d. 
Dec 12 17:22:24.807000 audit: BPF prog-id=251 op=LOAD Dec 12 17:22:24.807000 audit: BPF prog-id=252 op=LOAD Dec 12 17:22:24.807000 audit[5011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4970 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261326661633032633436653464343136386633333166306137313336 Dec 12 17:22:24.808000 audit: BPF prog-id=252 op=UNLOAD Dec 12 17:22:24.808000 audit[5011]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4970 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261326661633032633436653464343136386633333166306137313336 Dec 12 17:22:24.808000 audit: BPF prog-id=253 op=LOAD Dec 12 17:22:24.808000 audit[5011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4970 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261326661633032633436653464343136386633333166306137313336 Dec 12 17:22:24.808000 audit: BPF prog-id=254 op=LOAD Dec 12 17:22:24.808000 audit[5011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4970 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261326661633032633436653464343136386633333166306137313336 Dec 12 17:22:24.808000 audit: BPF prog-id=254 op=UNLOAD Dec 12 17:22:24.808000 audit[5011]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4970 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261326661633032633436653464343136386633333166306137313336 Dec 12 17:22:24.808000 audit: BPF prog-id=253 op=UNLOAD Dec 12 17:22:24.808000 audit[5011]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4970 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261326661633032633436653464343136386633333166306137313336 Dec 12 17:22:24.808000 audit: BPF prog-id=255 op=LOAD Dec 12 17:22:24.808000 audit[5011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4970 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:24.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261326661633032633436653464343136386633333166306137313336 Dec 12 17:22:24.825298 containerd[1697]: time="2025-12-12T17:22:24.825219818Z" level=info msg="StartContainer for \"2a2fac02c46e4d4168f331f0a7136a61a8c5d8b48ae9b169bf739fbadb6e764d\" returns successfully" Dec 12 17:22:25.083600 systemd-networkd[1601]: calid946d73cc33: Gained IPv6LL Dec 12 17:22:25.146743 systemd-networkd[1601]: cali8a7fbbc00e2: Gained IPv6LL Dec 12 17:22:25.274690 systemd-networkd[1601]: cali86e31d0e1dc: Gained IPv6LL Dec 12 17:22:25.675157 kubelet[2889]: E1212 17:22:25.675106 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:22:25.675157 kubelet[2889]: E1212 17:22:25.675242 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:22:25.689614 kubelet[2889]: I1212 17:22:25.689527 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dk5m8" podStartSLOduration=48.689508463 podStartE2EDuration="48.689508463s" podCreationTimestamp="2025-12-12 17:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:22:25.689158702 +0000 UTC m=+54.412206563" watchObservedRunningTime="2025-12-12 17:22:25.689508463 +0000 UTC m=+54.412556324" Dec 12 17:22:25.698000 audit[5051]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5051 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:25.703274 kernel: kauditd_printk_skb: 233 callbacks suppressed Dec 12 17:22:25.703378 kernel: audit: type=1325 audit(1765560145.698:741): table=filter:140 family=2 entries=14 op=nft_register_rule pid=5051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:25.698000 audit[5051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffef6a2c10 a2=0 a3=1 items=0 ppid=2997 pid=5051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:25.709182 kernel: audit: type=1300 audit(1765560145.698:741): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffef6a2c10 a2=0 a3=1 items=0 ppid=2997 pid=5051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:25.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:25.712426 kernel: audit: type=1327 audit(1765560145.698:741): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:25.713000 audit[5051]: NETFILTER_CFG table=nat:141 family=2 entries=44 op=nft_register_rule pid=5051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:25.713000 audit[5051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffef6a2c10 a2=0 a3=1 items=0 ppid=2997 pid=5051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:25.720505 kernel: audit: type=1325 audit(1765560145.713:742): table=nat:141 family=2 entries=44 op=nft_register_rule pid=5051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:25.720578 kernel: audit: type=1300 audit(1765560145.713:742): arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffef6a2c10 a2=0 a3=1 items=0 ppid=2997 pid=5051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:25.720601 kernel: audit: type=1327 audit(1765560145.713:742): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:25.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:25.723006 systemd-networkd[1601]: calif63ea2102b4: Gained IPv6LL Dec 12 17:22:26.736000 audit[5054]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:26.736000 audit[5054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe373e310 a2=0 a3=1 items=0 ppid=2997 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:26.743956 kernel: audit: type=1325 audit(1765560146.736:743): 
table=filter:142 family=2 entries=14 op=nft_register_rule pid=5054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:26.744166 kernel: audit: type=1300 audit(1765560146.736:743): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe373e310 a2=0 a3=1 items=0 ppid=2997 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:26.744201 kernel: audit: type=1327 audit(1765560146.736:743): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:26.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:26.756000 audit[5054]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=5054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:26.756000 audit[5054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe373e310 a2=0 a3=1 items=0 ppid=2997 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:26.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:26.761433 kernel: audit: type=1325 audit(1765560146.756:744): table=nat:143 family=2 entries=56 op=nft_register_chain pid=5054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:32.507053 containerd[1697]: time="2025-12-12T17:22:32.507004373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:22:32.854747 containerd[1697]: time="2025-12-12T17:22:32.854655436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:32.856231 containerd[1697]: time="2025-12-12T17:22:32.856177880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:22:32.856318 containerd[1697]: time="2025-12-12T17:22:32.856280760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:32.856534 kubelet[2889]: E1212 17:22:32.856483 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:22:32.856819 kubelet[2889]: E1212 17:22:32.856549 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:22:32.856819 kubelet[2889]: E1212 17:22:32.856661 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:036088a0863247c4915e9fb15ee70601,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:32.858757 containerd[1697]: time="2025-12-12T17:22:32.858737486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:22:33.177601 containerd[1697]: time="2025-12-12T17:22:33.177469274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:33.178684 containerd[1697]: time="2025-12-12T17:22:33.178606997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:22:33.178684 containerd[1697]: time="2025-12-12T17:22:33.178653317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:33.179159 kubelet[2889]: E1212 17:22:33.178929 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:22:33.179159 kubelet[2889]: E1212 17:22:33.178999 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:22:33.179159 kubelet[2889]: E1212 17:22:33.179108 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:33.180323 kubelet[2889]: E1212 17:22:33.180259 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:22:35.508477 containerd[1697]: time="2025-12-12T17:22:35.508257169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:22:35.853548 containerd[1697]: time="2025-12-12T17:22:35.853463465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:35.854935 containerd[1697]: time="2025-12-12T17:22:35.854895309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:22:35.855013 containerd[1697]: time="2025-12-12T17:22:35.854935509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:35.855173 kubelet[2889]: E1212 17:22:35.855118 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:22:35.855173 kubelet[2889]: E1212 17:22:35.855166 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:22:35.855500 kubelet[2889]: E1212 17:22:35.855318 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24jgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-696f66b658-n7nn2_calico-system(f675d505-5319-47a1-bc86-409be66cd047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:35.856763 kubelet[2889]: E1212 17:22:35.856715 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:22:36.507027 containerd[1697]: time="2025-12-12T17:22:36.506977883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:22:36.858173 containerd[1697]: time="2025-12-12T17:22:36.857967715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:36.859594 containerd[1697]: time="2025-12-12T17:22:36.859529479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:22:36.859667 containerd[1697]: time="2025-12-12T17:22:36.859550399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:36.859839 kubelet[2889]: E1212 17:22:36.859758 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:22:36.860067 kubelet[2889]: E1212 17:22:36.859861 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:22:36.860067 kubelet[2889]: E1212 17:22:36.859972 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:36.862211 containerd[1697]: time="2025-12-12T17:22:36.862181326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:22:37.195694 containerd[1697]: time="2025-12-12T17:22:37.195445831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:37.197117 containerd[1697]: time="2025-12-12T17:22:37.197011795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:22:37.197117 containerd[1697]: time="2025-12-12T17:22:37.197057156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:37.197439 kubelet[2889]: E1212 17:22:37.197382 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:22:37.197492 kubelet[2889]: E1212 17:22:37.197451 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:22:37.197609 kubelet[2889]: E1212 17:22:37.197557 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:37.198833 kubelet[2889]: E1212 17:22:37.198783 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:37.509728 containerd[1697]: time="2025-12-12T17:22:37.509387687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:22:37.851333 containerd[1697]: time="2025-12-12T17:22:37.851278575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
17:22:37.852840 containerd[1697]: time="2025-12-12T17:22:37.852805939Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:22:37.852907 containerd[1697]: time="2025-12-12T17:22:37.852880979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:37.853097 kubelet[2889]: E1212 17:22:37.853049 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:22:37.853170 kubelet[2889]: E1212 17:22:37.853098 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:22:37.853562 kubelet[2889]: E1212 17:22:37.853226 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xpg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-878796b8-5d5jh_calico-apiserver(51905ae7-f059-4931-8cc7-e32bc90c24e4): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:37.854494 kubelet[2889]: E1212 17:22:37.854457 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:22:38.506935 containerd[1697]: time="2025-12-12T17:22:38.506861038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:22:38.830167 containerd[1697]: time="2025-12-12T17:22:38.829914117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:38.831803 containerd[1697]: time="2025-12-12T17:22:38.831645562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:22:38.831803 containerd[1697]: time="2025-12-12T17:22:38.831747122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:38.831948 kubelet[2889]: E1212 17:22:38.831912 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:22:38.832193 kubelet[2889]: E1212 17:22:38.831957 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:22:38.832193 kubelet[2889]: E1212 17:22:38.832072 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrqmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-878796b8-spr8v_calico-apiserver(6e7780bc-34b6-4688-ae6a-fbd80527fba7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:38.833488 kubelet[2889]: E1212 17:22:38.833454 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:22:40.507664 containerd[1697]: time="2025-12-12T17:22:40.507590875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:22:40.834194 containerd[1697]: time="2025-12-12T17:22:40.834058483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:40.835495 containerd[1697]: time="2025-12-12T17:22:40.835386367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:22:40.835577 containerd[1697]: time="2025-12-12T17:22:40.835517087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:40.835830 kubelet[2889]: E1212 17:22:40.835755 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:22:40.835830 kubelet[2889]: E1212 17:22:40.835828 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:22:40.836199 kubelet[2889]: E1212 17:22:40.835953 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hffrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nzjsj_calico-system(0bdf7dfd-6bce-4744-a930-376661816277): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:40.837370 kubelet[2889]: E1212 17:22:40.837316 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:22:44.507577 kubelet[2889]: E1212 17:22:44.507517 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:22:47.506521 kubelet[2889]: E1212 17:22:47.506392 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:22:48.507693 kubelet[2889]: E1212 17:22:48.507612 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:22:51.512252 kubelet[2889]: E1212 17:22:51.512034 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:22:51.513960 kubelet[2889]: E1212 17:22:51.513908 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:22:53.507418 
kubelet[2889]: E1212 17:22:53.506960 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:22:55.510573 containerd[1697]: time="2025-12-12T17:22:55.510530567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:22:55.862325 containerd[1697]: time="2025-12-12T17:22:55.862250641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:55.863920 containerd[1697]: time="2025-12-12T17:22:55.863883965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:22:55.864001 containerd[1697]: time="2025-12-12T17:22:55.863967365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:55.864168 kubelet[2889]: E1212 17:22:55.864127 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:22:55.864503 kubelet[2889]: E1212 17:22:55.864180 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:22:55.864503 kubelet[2889]: E1212 17:22:55.864285 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:036088a0863247c4915e9fb15ee70601,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:55.866170 containerd[1697]: time="2025-12-12T17:22:55.866125091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:22:56.192170 containerd[1697]: time="2025-12-12T17:22:56.191920777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:56.194260 containerd[1697]: time="2025-12-12T17:22:56.194202623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:22:56.194349 containerd[1697]: time="2025-12-12T17:22:56.194238303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:56.194558 kubelet[2889]: E1212 17:22:56.194492 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:22:56.194558 kubelet[2889]: E1212 17:22:56.194550 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:22:56.194765 kubelet[2889]: E1212 17:22:56.194703 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:56.196148 kubelet[2889]: E1212 17:22:56.196087 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:23:00.507962 containerd[1697]: time="2025-12-12T17:23:00.507731748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:23:00.877725 containerd[1697]: time="2025-12-12T17:23:00.877610829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:00.879092 containerd[1697]: time="2025-12-12T17:23:00.879057633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:23:00.879159 containerd[1697]: time="2025-12-12T17:23:00.879104553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:00.879330 kubelet[2889]: E1212 17:23:00.879290 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:00.879659 kubelet[2889]: E1212 17:23:00.879343 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:00.879659 kubelet[2889]: E1212 17:23:00.879496 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24jgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-696f66b658-n7nn2_calico-system(f675d505-5319-47a1-bc86-409be66cd047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:00.881059 kubelet[2889]: E1212 17:23:00.881020 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:23:02.508935 containerd[1697]: time="2025-12-12T17:23:02.508888786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:02.839501 containerd[1697]: time="2025-12-12T17:23:02.839464405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:02.840813 containerd[1697]: time="2025-12-12T17:23:02.840776768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:02.840899 containerd[1697]: time="2025-12-12T17:23:02.840864169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:02.841068 kubelet[2889]: E1212 17:23:02.841026 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:02.841375 kubelet[2889]: E1212 17:23:02.841083 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:02.841375 kubelet[2889]: E1212 17:23:02.841203 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xpg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-878796b8-5d5jh_calico-apiserver(51905ae7-f059-4931-8cc7-e32bc90c24e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:02.842578 kubelet[2889]: E1212 17:23:02.842528 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:23:04.509560 containerd[1697]: time="2025-12-12T17:23:04.509009902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:23:04.865298 containerd[1697]: time="2025-12-12T17:23:04.865203227Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:04.866406 containerd[1697]: time="2025-12-12T17:23:04.866356830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:23:04.867229 containerd[1697]: time="2025-12-12T17:23:04.866448910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:04.867291 kubelet[2889]: E1212 17:23:04.866588 2889 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:04.867291 kubelet[2889]: E1212 17:23:04.866638 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:04.867291 kubelet[2889]: E1212 17:23:04.866840 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:04.867721 containerd[1697]: time="2025-12-12T17:23:04.867412433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:05.203331 containerd[1697]: time="2025-12-12T17:23:05.203198665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:05.205677 containerd[1697]: time="2025-12-12T17:23:05.205626951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:05.205804 
containerd[1697]: time="2025-12-12T17:23:05.205726352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:05.206027 kubelet[2889]: E1212 17:23:05.205988 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:05.206121 kubelet[2889]: E1212 17:23:05.206096 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:05.206495 containerd[1697]: time="2025-12-12T17:23:05.206424874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:23:05.206684 kubelet[2889]: E1212 17:23:05.206593 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrqmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-878796b8-spr8v_calico-apiserver(6e7780bc-34b6-4688-ae6a-fbd80527fba7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:05.209127 kubelet[2889]: 
E1212 17:23:05.208282 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:23:05.545650 containerd[1697]: time="2025-12-12T17:23:05.545534194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:05.546825 containerd[1697]: time="2025-12-12T17:23:05.546775598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:23:05.546996 containerd[1697]: time="2025-12-12T17:23:05.546865038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:05.547067 kubelet[2889]: E1212 17:23:05.547020 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:05.547116 kubelet[2889]: E1212 17:23:05.547078 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:05.547231 kubelet[2889]: E1212 17:23:05.547192 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:05.548585 kubelet[2889]: E1212 17:23:05.548533 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:23:08.507604 containerd[1697]: time="2025-12-12T17:23:08.507565089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:23:08.853479 containerd[1697]: time="2025-12-12T17:23:08.853396387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:08.854873 containerd[1697]: time="2025-12-12T17:23:08.854834431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:23:08.855378 containerd[1697]: time="2025-12-12T17:23:08.854913311Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:08.855462 kubelet[2889]: E1212 17:23:08.855055 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:23:08.855462 kubelet[2889]: E1212 17:23:08.855101 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:23:08.855462 kubelet[2889]: E1212 17:23:08.855240 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hffrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod goldmane-666569f655-nzjsj_calico-system(0bdf7dfd-6bce-4744-a930-376661816277): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:08.856757 kubelet[2889]: E1212 17:23:08.856704 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:23:10.508249 kubelet[2889]: E1212 17:23:10.508199 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:23:13.506990 kubelet[2889]: E1212 17:23:13.506884 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:23:15.507503 kubelet[2889]: E1212 17:23:15.507392 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:23:16.507715 kubelet[2889]: E1212 17:23:16.507668 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:23:20.507230 
kubelet[2889]: E1212 17:23:20.506884 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:23:20.507860 kubelet[2889]: E1212 17:23:20.507731 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:23:23.508392 kubelet[2889]: E1212 17:23:23.508298 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:23:26.507619 kubelet[2889]: E1212 17:23:26.507570 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:23:27.507148 kubelet[2889]: E1212 17:23:27.507080 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:23:28.507071 kubelet[2889]: E1212 17:23:28.507004 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:23:34.507692 kubelet[2889]: E1212 17:23:34.507593 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:23:34.507692 kubelet[2889]: E1212 17:23:34.507647 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:23:35.509067 kubelet[2889]: E1212 17:23:35.508686 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:23:39.507543 kubelet[2889]: E1212 17:23:39.507473 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:23:39.508345 kubelet[2889]: E1212 17:23:39.507979 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:23:40.506751 kubelet[2889]: E1212 17:23:40.506693 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:23:46.509041 containerd[1697]: time="2025-12-12T17:23:46.508847122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:23:46.862720 containerd[1697]: time="2025-12-12T17:23:46.862614921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:46.864058 containerd[1697]: time="2025-12-12T17:23:46.863999005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:23:46.864133 containerd[1697]: time="2025-12-12T17:23:46.864007085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:46.864287 kubelet[2889]: E1212 17:23:46.864221 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:23:46.864287 kubelet[2889]: E1212 17:23:46.864283 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:23:46.864608 kubelet[2889]: E1212 17:23:46.864390 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:036088a0863247c4915e9fb15ee70601,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:46.866647 containerd[1697]: time="2025-12-12T17:23:46.866616131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:23:47.207873 containerd[1697]: time="2025-12-12T17:23:47.207633578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:47.209459 containerd[1697]: time="2025-12-12T17:23:47.209382742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:23:47.209577 containerd[1697]: time="2025-12-12T17:23:47.209415662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:47.209679 kubelet[2889]: E1212 17:23:47.209619 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:23:47.209756 kubelet[2889]: E1212 17:23:47.209677 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:23:47.209865 kubelet[2889]: E1212 17:23:47.209789 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:47.211098 kubelet[2889]: E1212 17:23:47.211034 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:23:47.507674 kubelet[2889]: E1212 17:23:47.507319 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:23:48.507485 containerd[1697]: time="2025-12-12T17:23:48.507416555Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:23:48.830351 containerd[1697]: time="2025-12-12T17:23:48.830065433Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:48.831546 containerd[1697]: time="2025-12-12T17:23:48.831444116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:23:48.831546 containerd[1697]: time="2025-12-12T17:23:48.831507916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:48.831714 kubelet[2889]: E1212 17:23:48.831665 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:48.832041 kubelet[2889]: E1212 17:23:48.831714 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:48.832041 kubelet[2889]: E1212 17:23:48.831831 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:48.834150 containerd[1697]: time="2025-12-12T17:23:48.834119843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:23:49.156229 containerd[1697]: time="2025-12-12T17:23:49.156170400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:49.157805 containerd[1697]: time="2025-12-12T17:23:49.157738684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:23:49.157880 containerd[1697]: time="2025-12-12T17:23:49.157836124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:49.158220 kubelet[2889]: E1212 17:23:49.158005 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:49.158220 kubelet[2889]: E1212 17:23:49.158062 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:49.158220 kubelet[2889]: E1212 17:23:49.158170 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:49.159539 kubelet[2889]: E1212 17:23:49.159467 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:23:51.510272 containerd[1697]: time="2025-12-12T17:23:51.510188395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:51.855292 containerd[1697]: time="2025-12-12T17:23:51.855218171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:51.857038 containerd[1697]: time="2025-12-12T17:23:51.856972175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:51.857220 containerd[1697]: time="2025-12-12T17:23:51.857008376Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:51.857271 kubelet[2889]: E1212 17:23:51.857229 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:51.857728 kubelet[2889]: E1212 17:23:51.857293 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:51.857728 kubelet[2889]: E1212 17:23:51.857469 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xpg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-878796b8-5d5jh_calico-apiserver(51905ae7-f059-4931-8cc7-e32bc90c24e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:51.858695 kubelet[2889]: E1212 17:23:51.858660 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:23:53.220971 systemd[1]: Started sshd@7-10.0.6.252:22-139.178.89.65:55922.service - OpenSSH per-connection server daemon (139.178.89.65:55922). Dec 12 17:23:53.225788 kernel: kauditd_printk_skb: 2 callbacks suppressed Dec 12 17:23:53.225830 kernel: audit: type=1130 audit(1765560233.220:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.6.252:22-139.178.89.65:55922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:53.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.6.252:22-139.178.89.65:55922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:53.513676 containerd[1697]: time="2025-12-12T17:23:53.513493198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:23:53.843306 containerd[1697]: time="2025-12-12T17:23:53.843235335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:53.844928 containerd[1697]: time="2025-12-12T17:23:53.844853219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:23:53.844928 containerd[1697]: time="2025-12-12T17:23:53.844886659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:53.845105 kubelet[2889]: E1212 17:23:53.845063 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:53.845663 kubelet[2889]: E1212 17:23:53.845110 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:53.845663 kubelet[2889]: E1212 17:23:53.845382 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24jgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-696f66b658-n7nn2_calico-system(f675d505-5319-47a1-bc86-409be66cd047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:53.846335 containerd[1697]: time="2025-12-12T17:23:53.846111102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:53.847344 kubelet[2889]: E1212 17:23:53.847301 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:23:54.081160 sshd[5199]: Accepted publickey for core from 
139.178.89.65 port 55922 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:23:54.080000 audit[5199]: USER_ACCT pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.086476 kernel: audit: type=1101 audit(1765560234.080:746): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.086583 kernel: audit: type=1103 audit(1765560234.085:747): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.085000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.088012 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:54.092253 kernel: audit: type=1006 audit(1765560234.085:748): pid=5199 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 12 17:23:54.085000 audit[5199]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8e67980 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:54.095927 kernel: audit: type=1300 audit(1765560234.085:748): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8e67980 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:54.085000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:54.097566 kernel: audit: type=1327 audit(1765560234.085:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:54.099667 systemd-logind[1663]: New session 8 of user core. Dec 12 17:23:54.106613 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 12 17:23:54.108000 audit[5199]: USER_START pid=5199 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.112000 audit[5202]: CRED_ACQ pid=5202 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.116378 kernel: audit: type=1105 audit(1765560234.108:749): pid=5199 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.116489 kernel: audit: type=1103 audit(1765560234.112:750): pid=5202 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.196752 containerd[1697]: time="2025-12-12T17:23:54.196556213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:54.204092 containerd[1697]: time="2025-12-12T17:23:54.204029472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:54.204240 containerd[1697]: time="2025-12-12T17:23:54.204094432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:54.204571 kubelet[2889]: E1212 17:23:54.204264 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:54.204571 kubelet[2889]: E1212 17:23:54.204341 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:54.205883 kubelet[2889]: E1212 17:23:54.205744 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrqmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-878796b8-spr8v_calico-apiserver(6e7780bc-34b6-4688-ae6a-fbd80527fba7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:54.207069 kubelet[2889]: E1212 17:23:54.207003 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:23:54.629498 sshd[5202]: Connection closed by 139.178.89.65 port 55922 Dec 12 17:23:54.629349 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:54.630000 audit[5199]: USER_END pid=5199 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.635213 systemd[1]: sshd@7-10.0.6.252:22-139.178.89.65:55922.service: Deactivated successfully. 
Dec 12 17:23:54.630000 audit[5199]: CRED_DISP pid=5199 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.638509 kernel: audit: type=1106 audit(1765560234.630:751): pid=5199 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.638700 kernel: audit: type=1104 audit(1765560234.630:752): pid=5199 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:23:54.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.6.252:22-139.178.89.65:55922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:54.640069 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:23:54.641116 systemd-logind[1663]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:23:54.643018 systemd-logind[1663]: Removed session 8. Dec 12 17:23:58.508210 kubelet[2889]: E1212 17:23:58.508121 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:23:59.800330 systemd[1]: Started sshd@8-10.0.6.252:22-139.178.89.65:55924.service - OpenSSH per-connection server daemon (139.178.89.65:55924). Dec 12 17:23:59.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.6.252:22-139.178.89.65:55924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:59.804372 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:23:59.804463 kernel: audit: type=1130 audit(1765560239.800:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.6.252:22-139.178.89.65:55924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:00.630541 sshd[5231]: Accepted publickey for core from 139.178.89.65 port 55924 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:00.629000 audit[5231]: USER_ACCT pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:00.632079 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:00.630000 audit[5231]: CRED_ACQ pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:00.638009 systemd-logind[1663]: New session 9 of user core. Dec 12 17:24:00.638628 kernel: audit: type=1101 audit(1765560240.629:755): pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:00.638683 kernel: audit: type=1103 audit(1765560240.630:756): pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:00.638708 kernel: audit: type=1006 audit(1765560240.630:757): pid=5231 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 12 17:24:00.630000 audit[5231]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc292da50 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:00.641657 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 12 17:24:00.643786 kernel: audit: type=1300 audit(1765560240.630:757): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc292da50 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:00.630000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:00.645140 kernel: audit: type=1327 audit(1765560240.630:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:00.643000 audit[5231]: USER_START pid=5231 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:00.648926 kernel: audit: type=1105 audit(1765560240.643:758): pid=5231 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:00.649000 audit[5234]: CRED_ACQ pid=5234 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:00.653454 kernel: audit: type=1103 audit(1765560240.649:759): pid=5234 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:01.209626 sshd[5234]: Connection closed by 139.178.89.65 port 55924 Dec 12 17:24:01.210011 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:01.210000 audit[5231]: USER_END pid=5231 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:01.213633 systemd-logind[1663]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:24:01.210000 audit[5231]: CRED_DISP pid=5231 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:01.215495 systemd[1]: sshd@8-10.0.6.252:22-139.178.89.65:55924.service: Deactivated successfully. Dec 12 17:24:01.217878 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 12 17:24:01.218334 kernel: audit: type=1106 audit(1765560241.210:760): pid=5231 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:01.218384 kernel: audit: type=1104 audit(1765560241.210:761): pid=5231 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:01.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.6.252:22-139.178.89.65:55924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:01.221290 systemd-logind[1663]: Removed session 9. Dec 12 17:24:01.377968 systemd[1]: Started sshd@9-10.0.6.252:22-139.178.89.65:46220.service - OpenSSH per-connection server daemon (139.178.89.65:46220). Dec 12 17:24:01.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.6.252:22-139.178.89.65:46220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:01.510023 containerd[1697]: time="2025-12-12T17:24:01.509886730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:24:01.851192 containerd[1697]: time="2025-12-12T17:24:01.851151377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:24:01.852810 containerd[1697]: time="2025-12-12T17:24:01.852759861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:24:01.852972 containerd[1697]: time="2025-12-12T17:24:01.852848981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:24:01.853003 kubelet[2889]: E1212 17:24:01.852968 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:24:01.853246 kubelet[2889]: E1212 17:24:01.853013 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:24:01.853246 kubelet[2889]: E1212 17:24:01.853137 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hffrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nzjsj_calico-system(0bdf7dfd-6bce-4744-a930-376661816277): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:24:01.854576 kubelet[2889]: E1212 17:24:01.854533 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:24:02.200000 audit[5248]: USER_ACCT pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:02.201452 sshd[5248]: Accepted publickey for core from 139.178.89.65 port 46220 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:02.202000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:02.202000 audit[5248]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5faca50 a2=3 a3=0 items=0 ppid=1 pid=5248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:02.202000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:02.203030 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:02.209464 systemd-logind[1663]: New session 10 of user core. Dec 12 17:24:02.213594 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:24:02.216000 audit[5248]: USER_START pid=5248 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:02.218000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:02.508288 kubelet[2889]: E1212 17:24:02.506711 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:24:02.508288 kubelet[2889]: E1212 17:24:02.507296 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:24:02.810507 sshd[5251]: 
Connection closed by 139.178.89.65 port 46220 Dec 12 17:24:02.810629 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:02.812000 audit[5248]: USER_END pid=5248 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:02.812000 audit[5248]: CRED_DISP pid=5248 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:02.815838 systemd-logind[1663]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:24:02.816236 systemd[1]: sshd@9-10.0.6.252:22-139.178.89.65:46220.service: Deactivated successfully. Dec 12 17:24:02.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.6.252:22-139.178.89.65:46220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:02.819343 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:24:02.821245 systemd-logind[1663]: Removed session 10. Dec 12 17:24:02.981416 systemd[1]: Started sshd@10-10.0.6.252:22-139.178.89.65:46232.service - OpenSSH per-connection server daemon (139.178.89.65:46232). Dec 12 17:24:02.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.6.252:22-139.178.89.65:46232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:03.821000 audit[5263]: USER_ACCT pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:03.821810 sshd[5263]: Accepted publickey for core from 139.178.89.65 port 46232 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:03.822000 audit[5263]: CRED_ACQ pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:03.822000 audit[5263]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe33f5cb0 a2=3 a3=0 items=0 ppid=1 pid=5263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:03.822000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:03.823218 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:03.829446 systemd-logind[1663]: New session 11 of user core. Dec 12 17:24:03.838593 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 17:24:03.843000 audit[5263]: USER_START pid=5263 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:03.844000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:04.365914 sshd[5266]: Connection closed by 139.178.89.65 port 46232 Dec 12 17:24:04.366539 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:04.368000 audit[5263]: USER_END pid=5263 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:04.369000 audit[5263]: CRED_DISP pid=5263 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:04.372003 systemd[1]: sshd@10-10.0.6.252:22-139.178.89.65:46232.service: Deactivated successfully. Dec 12 17:24:04.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.6.252:22-139.178.89.65:46232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:04.374833 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:24:04.377395 systemd-logind[1663]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:24:04.379514 systemd-logind[1663]: Removed session 11. Dec 12 17:24:08.507418 kubelet[2889]: E1212 17:24:08.507184 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:24:09.509966 kubelet[2889]: E1212 17:24:09.509437 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:24:09.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.6.252:22-139.178.89.65:46238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:09.532618 systemd[1]: Started sshd@11-10.0.6.252:22-139.178.89.65:46238.service - OpenSSH per-connection server daemon (139.178.89.65:46238). Dec 12 17:24:09.536358 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 17:24:09.536462 kernel: audit: type=1130 audit(1765560249.532:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.6.252:22-139.178.89.65:46238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:10.352000 audit[5285]: USER_ACCT pid=5285 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.354675 sshd[5285]: Accepted publickey for core from 139.178.89.65 port 46238 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:10.357432 kernel: audit: type=1101 audit(1765560250.352:782): pid=5285 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.357000 audit[5285]: CRED_ACQ pid=5285 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.358391 sshd-session[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:10.363155 kernel: audit: type=1103 audit(1765560250.357:783): pid=5285 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.363211 kernel: audit: type=1006 audit(1765560250.357:784): pid=5285 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 12 17:24:10.357000 audit[5285]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe67a0770 a2=3 a3=0 items=0 ppid=1 pid=5285 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:10.366873 kernel: audit: type=1300 audit(1765560250.357:784): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe67a0770 a2=3 a3=0 items=0 ppid=1 pid=5285 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:10.366998 kernel: audit: type=1327 audit(1765560250.357:784): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:10.357000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:10.372252 systemd-logind[1663]: New session 12 of user core. Dec 12 17:24:10.383686 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 17:24:10.386000 audit[5285]: USER_START pid=5285 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.390000 audit[5288]: CRED_ACQ pid=5288 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.394436 kernel: audit: type=1105 audit(1765560250.386:785): pid=5285 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.394611 kernel: audit: type=1103 audit(1765560250.390:786): pid=5288 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.891611 sshd[5288]: Connection closed by 139.178.89.65 port 46238 Dec 12 17:24:10.892322 sshd-session[5285]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:10.894000 audit[5285]: USER_END pid=5285 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.898079 systemd[1]: sshd@11-10.0.6.252:22-139.178.89.65:46238.service: Deactivated successfully. Dec 12 17:24:10.901063 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:24:10.894000 audit[5285]: CRED_DISP pid=5285 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.903179 systemd-logind[1663]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:24:10.904226 systemd-logind[1663]: Removed session 12. Dec 12 17:24:10.905580 kernel: audit: type=1106 audit(1765560250.894:787): pid=5285 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.905642 kernel: audit: type=1104 audit(1765560250.894:788): pid=5285 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:10.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.6.252:22-139.178.89.65:46238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:13.508118 kubelet[2889]: E1212 17:24:13.507990 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:24:15.507944 kubelet[2889]: E1212 17:24:15.507894 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:24:16.060137 systemd[1]: Started sshd@12-10.0.6.252:22-139.178.89.65:38268.service - OpenSSH per-connection server daemon (139.178.89.65:38268). Dec 12 17:24:16.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.6.252:22-139.178.89.65:38268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:16.061133 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:24:16.061196 kernel: audit: type=1130 audit(1765560256.059:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.6.252:22-139.178.89.65:38268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:16.883000 audit[5301]: USER_ACCT pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:16.884505 sshd[5301]: Accepted publickey for core from 139.178.89.65 port 38268 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:16.886752 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:16.885000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:16.892511 kernel: audit: type=1101 audit(1765560256.883:791): pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:16.892586 kernel: audit: type=1103 audit(1765560256.885:792): pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:16.892636 kernel: audit: type=1006 audit(1765560256.886:793): pid=5301 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 17:24:16.886000 audit[5301]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc89c860 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:16.897965 kernel: audit: type=1300 audit(1765560256.886:793): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc89c860 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:16.898061 kernel: audit: type=1327 audit(1765560256.886:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:16.886000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:16.898172 systemd-logind[1663]: New session 13 of user core. Dec 12 17:24:16.906879 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 12 17:24:16.911000 audit[5301]: USER_START pid=5301 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:16.914000 audit[5304]: CRED_ACQ pid=5304 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:16.920277 kernel: audit: type=1105 audit(1765560256.911:794): pid=5301 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:16.920380 kernel: audit: type=1103 audit(1765560256.914:795): pid=5304 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:17.423541 sshd[5304]: Connection closed by 139.178.89.65 port 38268 Dec 12 17:24:17.424624 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:17.425000 audit[5301]: USER_END pid=5301 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:17.429261 systemd-logind[1663]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:24:17.429429 systemd[1]: sshd@12-10.0.6.252:22-139.178.89.65:38268.service: Deactivated successfully. Dec 12 17:24:17.425000 audit[5301]: CRED_DISP pid=5301 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:17.431460 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:24:17.432977 kernel: audit: type=1106 audit(1765560257.425:796): pid=5301 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:17.433075 kernel: audit: type=1104 audit(1765560257.425:797): pid=5301 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:17.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.6.252:22-139.178.89.65:38268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:17.433561 systemd-logind[1663]: Removed session 13. 
Dec 12 17:24:17.508674 kubelet[2889]: E1212 17:24:17.507735 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:24:17.509587 kubelet[2889]: E1212 17:24:17.509516 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:24:19.508554 kubelet[2889]: E1212 17:24:19.508500 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:24:22.507972 kubelet[2889]: E1212 17:24:22.507617 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:24:22.590313 systemd[1]: Started sshd@13-10.0.6.252:22-139.178.89.65:38704.service - OpenSSH per-connection server daemon (139.178.89.65:38704). Dec 12 17:24:22.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.6.252:22-139.178.89.65:38704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:22.591464 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:24:22.591563 kernel: audit: type=1130 audit(1765560262.590:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.6.252:22-139.178.89.65:38704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:23.419557 sshd[5344]: Accepted publickey for core from 139.178.89.65 port 38704 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:23.419000 audit[5344]: USER_ACCT pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.422618 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:23.421000 audit[5344]: CRED_ACQ pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.427302 kernel: audit: type=1101 audit(1765560263.419:800): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.427378 kernel: audit: type=1103 audit(1765560263.421:801): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.429513 kernel: audit: type=1006 audit(1765560263.421:802): pid=5344 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 17:24:23.421000 audit[5344]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff993a9a0 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:23.433386 kernel: audit: type=1300 audit(1765560263.421:802): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff993a9a0 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:23.421000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:23.435033 kernel: audit: type=1327 audit(1765560263.421:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:23.437153 systemd-logind[1663]: New session 14 of user core. Dec 12 17:24:23.450178 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 17:24:23.452000 audit[5344]: USER_START pid=5344 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.453000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.459766 kernel: audit: type=1105 audit(1765560263.452:803): pid=5344 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.459855 kernel: audit: type=1103 audit(1765560263.453:804): pid=5347 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.963651 sshd[5347]: Connection closed by 139.178.89.65 port 38704 Dec 12 17:24:23.964086 sshd-session[5344]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:23.964000 audit[5344]: USER_END pid=5344 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.967876 systemd[1]: sshd@13-10.0.6.252:22-139.178.89.65:38704.service: Deactivated successfully. Dec 12 17:24:23.965000 audit[5344]: CRED_DISP pid=5344 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.969584 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:24:23.970863 systemd-logind[1663]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:24:23.971783 systemd-logind[1663]: Removed session 14. Dec 12 17:24:23.972443 kernel: audit: type=1106 audit(1765560263.964:805): pid=5344 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.972519 kernel: audit: type=1104 audit(1765560263.965:806): pid=5344 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:23.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.6.252:22-139.178.89.65:38704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:24.131092 systemd[1]: Started sshd@14-10.0.6.252:22-139.178.89.65:38710.service - OpenSSH per-connection server daemon (139.178.89.65:38710). Dec 12 17:24:24.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.6.252:22-139.178.89.65:38710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:24.950000 audit[5361]: USER_ACCT pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:24.951590 sshd[5361]: Accepted publickey for core from 139.178.89.65 port 38710 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:24.951000 audit[5361]: CRED_ACQ pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:24.951000 audit[5361]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff55d5ad0 a2=3 a3=0 items=0 ppid=1 pid=5361 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:24.951000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:24.952056 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:24.960673 systemd-logind[1663]: New session 15 of user core. Dec 12 17:24:24.966628 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:24:24.968000 audit[5361]: USER_START pid=5361 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:24.970000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:25.550580 sshd[5364]: Connection closed by 139.178.89.65 port 38710 Dec 12 17:24:25.551041 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:25.552000 audit[5361]: USER_END pid=5361 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:25.552000 audit[5361]: CRED_DISP pid=5361 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:25.556475 systemd-logind[1663]: Session 15 logged out. Waiting for processes to exit. 
Dec 12 17:24:25.556667 systemd[1]: sshd@14-10.0.6.252:22-139.178.89.65:38710.service: Deactivated successfully. Dec 12 17:24:25.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.6.252:22-139.178.89.65:38710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:25.559588 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:24:25.562015 systemd-logind[1663]: Removed session 15. Dec 12 17:24:25.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.6.252:22-139.178.89.65:38722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:25.724169 systemd[1]: Started sshd@15-10.0.6.252:22-139.178.89.65:38722.service - OpenSSH per-connection server daemon (139.178.89.65:38722). Dec 12 17:24:26.507203 kubelet[2889]: E1212 17:24:26.507159 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:24:26.563000 audit[5375]: USER_ACCT pid=5375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:26.565147 sshd[5375]: Accepted publickey for core from 139.178.89.65 port 38722 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:26.565000 audit[5375]: CRED_ACQ pid=5375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:26.565000 audit[5375]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffae62470 a2=3 a3=0 items=0 ppid=1 pid=5375 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:26.565000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:26.566450 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:26.571214 systemd-logind[1663]: New session 16 of user core. Dec 12 17:24:26.580661 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 17:24:26.582000 audit[5375]: USER_START pid=5375 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:26.584000 audit[5378]: CRED_ACQ pid=5378 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:27.509000 audit[5390]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:24:27.509000 audit[5390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd3092f10 a2=0 a3=1 items=0 ppid=2997 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:27.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:24:27.515000 audit[5390]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:24:27.515000 audit[5390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd3092f10 a2=0 a3=1 items=0 ppid=2997 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:27.515000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:24:27.533000 audit[5392]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:24:27.533000 audit[5392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdcd15f10 a2=0 a3=1 items=0 ppid=2997 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:27.533000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:24:27.541000 audit[5392]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:24:27.541000 audit[5392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdcd15f10 a2=0 a3=1 items=0 ppid=2997 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:27.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:24:27.664463 sshd[5378]: Connection closed by 139.178.89.65 port 38722 Dec 12 17:24:27.664352 sshd-session[5375]: pam_unix(sshd:session): session closed for user core Dec 12 
17:24:27.667000 audit[5375]: USER_END pid=5375 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:27.674213 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:24:27.674339 kernel: audit: type=1106 audit(1765560267.667:827): pid=5375 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:27.670317 systemd[1]: sshd@15-10.0.6.252:22-139.178.89.65:38722.service: Deactivated successfully. Dec 12 17:24:27.667000 audit[5375]: CRED_DISP pid=5375 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:27.679411 kernel: audit: type=1104 audit(1765560267.667:828): pid=5375 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:27.675906 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:24:27.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.6.252:22-139.178.89.65:38722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:27.680236 systemd-logind[1663]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:24:27.685438 kernel: audit: type=1131 audit(1765560267.670:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.6.252:22-139.178.89.65:38722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:27.685859 systemd-logind[1663]: Removed session 16. Dec 12 17:24:27.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.6.252:22-139.178.89.65:38732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:27.832907 systemd[1]: Started sshd@16-10.0.6.252:22-139.178.89.65:38732.service - OpenSSH per-connection server daemon (139.178.89.65:38732). Dec 12 17:24:27.840440 kernel: audit: type=1130 audit(1765560267.832:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.6.252:22-139.178.89.65:38732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:28.506622 kubelet[2889]: E1212 17:24:28.506551 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:24:28.507414 kubelet[2889]: E1212 17:24:28.507352 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:24:28.673000 audit[5397]: USER_ACCT pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:28.673979 sshd[5397]: Accepted publickey for core from 139.178.89.65 port 38732 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:28.677153 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:28.676000 audit[5397]: CRED_ACQ pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:28.682328 kernel: audit: type=1101 audit(1765560268.673:831): pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:28.682426 kernel: audit: type=1103 audit(1765560268.676:832): pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:28.682465 kernel: audit: type=1006 audit(1765560268.676:833): pid=5397 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 17:24:28.686021 kernel: audit: type=1300 audit(1765560268.676:833): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcde9c560 a2=3 a3=0 items=0 ppid=1 pid=5397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:28.676000 audit[5397]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcde9c560 a2=3 a3=0 items=0 ppid=1 pid=5397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:28.686417 systemd-logind[1663]: New session 17 of user core. Dec 12 17:24:28.676000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:28.690646 kernel: audit: type=1327 audit(1765560268.676:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:28.692654 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:24:28.694000 audit[5397]: USER_START pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:28.699505 kernel: audit: type=1105 audit(1765560268.694:834): pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:28.699000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:29.322256 sshd[5400]: Connection closed by 139.178.89.65 port 38732 Dec 12 17:24:29.322887 sshd-session[5397]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:29.325000 audit[5397]: USER_END pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:29.325000 audit[5397]: CRED_DISP pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:29.330684 systemd-logind[1663]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:24:29.330894 systemd[1]: sshd@16-10.0.6.252:22-139.178.89.65:38732.service: Deactivated successfully. Dec 12 17:24:29.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.6.252:22-139.178.89.65:38732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:29.334377 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:24:29.338646 systemd-logind[1663]: Removed session 17. Dec 12 17:24:29.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.6.252:22-139.178.89.65:38748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:29.511070 systemd[1]: Started sshd@17-10.0.6.252:22-139.178.89.65:38748.service - OpenSSH per-connection server daemon (139.178.89.65:38748). Dec 12 17:24:30.365000 audit[5411]: USER_ACCT pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:30.366288 sshd[5411]: Accepted publickey for core from 139.178.89.65 port 38748 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:30.366000 audit[5411]: CRED_ACQ pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:30.366000 audit[5411]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0f52ed0 a2=3 a3=0 items=0 ppid=1 pid=5411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:30.366000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:30.367583 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:30.372348 systemd-logind[1663]: New session 18 of user core. Dec 12 17:24:30.379701 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 17:24:30.381000 audit[5411]: USER_START pid=5411 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:30.382000 audit[5414]: CRED_ACQ pid=5414 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:30.913748 sshd[5414]: Connection closed by 139.178.89.65 port 38748 Dec 12 17:24:30.913926 sshd-session[5411]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:30.914000 audit[5411]: USER_END pid=5411 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:30.915000 audit[5411]: CRED_DISP pid=5411 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:30.918112 systemd[1]: sshd@17-10.0.6.252:22-139.178.89.65:38748.service: Deactivated successfully. Dec 12 17:24:30.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.6.252:22-139.178.89.65:38748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:30.919934 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:24:30.921096 systemd-logind[1663]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:24:30.922675 systemd-logind[1663]: Removed session 18. Dec 12 17:24:31.045000 audit[5427]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5427 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:24:31.045000 audit[5427]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe43cde70 a2=0 a3=1 items=0 ppid=2997 pid=5427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:31.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:24:31.053000 audit[5427]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5427 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:24:31.053000 audit[5427]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe43cde70 a2=0 a3=1 items=0 ppid=2997 pid=5427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:31.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:24:31.510140 kubelet[2889]: E1212 17:24:31.510081 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:24:34.506639 kubelet[2889]: E1212 17:24:34.506549 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:24:36.083667 systemd[1]: Started sshd@18-10.0.6.252:22-139.178.89.65:41176.service - OpenSSH per-connection server daemon (139.178.89.65:41176). 
Dec 12 17:24:36.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.6.252:22-139.178.89.65:41176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:36.084487 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 12 17:24:36.084547 kernel: audit: type=1130 audit(1765560276.083:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.6.252:22-139.178.89.65:41176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:36.507347 kubelet[2889]: E1212 17:24:36.507285 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:24:36.907000 audit[5431]: USER_ACCT pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:36.908294 sshd[5431]: Accepted publickey for core from 139.178.89.65 port 41176 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:36.912423 kernel: audit: type=1101 audit(1765560276.907:851): pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:36.912000 audit[5431]: CRED_ACQ pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:36.915527 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:36.918832 kernel: audit: type=1103 audit(1765560276.912:852): pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:36.918925 kernel: audit: type=1006 audit(1765560276.912:853): pid=5431 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 12 17:24:36.918958 kernel: audit: type=1300 audit(1765560276.912:853): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7a73f90 a2=3 a3=0 items=0 ppid=1 pid=5431 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:36.912000 audit[5431]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7a73f90 a2=3 a3=0 items=0 ppid=1 pid=5431 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:36.912000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:36.923677 kernel: audit: type=1327 audit(1765560276.912:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:36.926384 systemd-logind[1663]: New session 19 of user core. Dec 12 17:24:36.936606 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:24:36.938000 audit[5431]: USER_START pid=5431 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:36.942000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:36.946272 kernel: audit: type=1105 audit(1765560276.938:854): pid=5431 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:36.946486 kernel: audit: type=1103 audit(1765560276.942:855): pid=5434 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:37.439075 sshd[5434]: Connection closed by 139.178.89.65 port 41176 Dec 12 17:24:37.439594 sshd-session[5431]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:37.444000 audit[5431]: USER_END pid=5431 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:37.451763 systemd[1]: sshd@18-10.0.6.252:22-139.178.89.65:41176.service: Deactivated successfully. Dec 12 17:24:37.453716 systemd[1]: session-19.scope: Deactivated successfully. 
Dec 12 17:24:37.445000 audit[5431]: CRED_DISP pid=5431 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:37.457121 kernel: audit: type=1106 audit(1765560277.444:856): pid=5431 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:37.457206 kernel: audit: type=1104 audit(1765560277.445:857): pid=5431 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:37.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.6.252:22-139.178.89.65:41176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:37.458832 systemd-logind[1663]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:24:37.461318 systemd-logind[1663]: Removed session 19. Dec 12 17:24:39.510625 kubelet[2889]: E1212 17:24:39.510557 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:24:41.507976 kubelet[2889]: E1212 17:24:41.507898 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:24:42.508006 kubelet[2889]: E1212 17:24:42.507958 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:24:42.608935 systemd[1]: Started 
sshd@19-10.0.6.252:22-139.178.89.65:51138.service - OpenSSH per-connection server daemon (139.178.89.65:51138). Dec 12 17:24:42.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.6.252:22-139.178.89.65:51138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:42.610737 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:24:42.610812 kernel: audit: type=1130 audit(1765560282.608:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.6.252:22-139.178.89.65:51138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.434000 audit[5450]: USER_ACCT pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:43.434949 sshd[5450]: Accepted publickey for core from 139.178.89.65 port 51138 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:43.437526 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:43.436000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:43.441731 kernel: audit: type=1101 audit(1765560283.434:860): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:43.441876 kernel: audit: type=1103 audit(1765560283.436:861): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:43.441941 kernel: audit: type=1006 audit(1765560283.436:862): pid=5450 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 12 17:24:43.436000 audit[5450]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe23946b0 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:43.447684 kernel: audit: type=1300 audit(1765560283.436:862): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe23946b0 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:43.448114 kernel: audit: type=1327 audit(1765560283.436:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:43.436000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:43.452927 systemd-logind[1663]: New session 20 of user core. 
Dec 12 17:24:43.466628 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:24:43.468000 audit[5450]: USER_START pid=5450 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:43.473441 kernel: audit: type=1105 audit(1765560283.468:863): pid=5450 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:43.473000 audit[5453]: CRED_ACQ pid=5453 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:43.477434 kernel: audit: type=1103 audit(1765560283.473:864): pid=5453 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:43.509665 kubelet[2889]: E1212 17:24:43.509572 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:24:44.003095 sshd[5453]: Connection closed by 139.178.89.65 port 51138 Dec 12 17:24:44.003458 sshd-session[5450]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:44.005000 audit[5450]: USER_END pid=5450 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:44.008340 systemd[1]: sshd@19-10.0.6.252:22-139.178.89.65:51138.service: Deactivated successfully. 
Dec 12 17:24:44.005000 audit[5450]: CRED_DISP pid=5450 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:44.014120 kernel: audit: type=1106 audit(1765560284.005:865): pid=5450 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:44.014196 kernel: audit: type=1104 audit(1765560284.005:866): pid=5450 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:44.011508 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:24:44.013304 systemd-logind[1663]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:24:44.014133 systemd-logind[1663]: Removed session 20. Dec 12 17:24:44.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.6.252:22-139.178.89.65:51138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:47.506544 kubelet[2889]: E1212 17:24:47.506499 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:24:48.507557 kubelet[2889]: E1212 17:24:48.507495 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:24:49.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.6.252:22-139.178.89.65:51148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:49.172820 systemd[1]: Started sshd@20-10.0.6.252:22-139.178.89.65:51148.service - OpenSSH per-connection server daemon (139.178.89.65:51148). Dec 12 17:24:49.173765 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:24:49.173789 kernel: audit: type=1130 audit(1765560289.172:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.6.252:22-139.178.89.65:51148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:49.996000 audit[5466]: USER_ACCT pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:49.997099 sshd[5466]: Accepted publickey for core from 139.178.89.65 port 51148 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:50.001511 kernel: audit: type=1101 audit(1765560289.996:869): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.005756 kernel: audit: type=1103 audit(1765560290.001:870): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.001000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.002166 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:50.009280 kernel: audit: type=1006 audit(1765560290.001:871): pid=5466 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 12 17:24:50.001000 audit[5466]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffed3940 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:50.013394 kernel: audit: type=1300 audit(1765560290.001:871): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffed3940 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:50.001000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:50.015128 kernel: audit: type=1327 audit(1765560290.001:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:50.015993 systemd-logind[1663]: New session 21 of user core. Dec 12 17:24:50.031945 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:24:50.034000 audit[5466]: USER_START pid=5466 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.038000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.042298 kernel: audit: type=1105 audit(1765560290.034:872): pid=5466 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.042371 kernel: audit: type=1103 audit(1765560290.038:873): pid=5493 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.507287 kubelet[2889]: E1212 17:24:50.507236 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:24:50.532359 sshd[5493]: Connection closed by 139.178.89.65 port 51148 Dec 12 17:24:50.532952 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:50.534000 audit[5466]: USER_END pid=5466 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.538090 systemd[1]: sshd@20-10.0.6.252:22-139.178.89.65:51148.service: Deactivated successfully. Dec 12 17:24:50.535000 audit[5466]: CRED_DISP pid=5466 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.541077 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:24:50.541880 systemd-logind[1663]: Session 21 logged out. Waiting for processes to exit. 
Dec 12 17:24:50.542781 kernel: audit: type=1106 audit(1765560290.534:874): pid=5466 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.542843 kernel: audit: type=1104 audit(1765560290.535:875): pid=5466 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:50.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.6.252:22-139.178.89.65:51148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:50.544054 systemd-logind[1663]: Removed session 21. Dec 12 17:24:55.508083 kubelet[2889]: E1212 17:24:55.507535 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:24:55.713587 systemd[1]: Started sshd@21-10.0.6.252:22-139.178.89.65:37594.service - OpenSSH per-connection server daemon (139.178.89.65:37594). Dec 12 17:24:55.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.6.252:22-139.178.89.65:37594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:55.715003 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:24:55.715060 kernel: audit: type=1130 audit(1765560295.713:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.6.252:22-139.178.89.65:37594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:56.508265 kubelet[2889]: E1212 17:24:56.507035 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:24:56.508265 kubelet[2889]: E1212 17:24:56.507672 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:24:56.567993 sshd[5506]: Accepted publickey for core from 139.178.89.65 port 37594 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:24:56.567000 audit[5506]: USER_ACCT pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:56.571000 audit[5506]: CRED_ACQ pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:56.573713 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:24:56.575421 kernel: audit: type=1101 audit(1765560296.567:878): pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:56.575474 kernel: audit: type=1103 audit(1765560296.571:879): pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:56.577425 kernel: audit: type=1006 audit(1765560296.572:880): pid=5506 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 17:24:56.577535 kernel: audit: type=1300 audit(1765560296.572:880): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2bb4840 a2=3 a3=0 items=0 ppid=1 pid=5506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:56.572000 audit[5506]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2bb4840 a2=3 a3=0 items=0 ppid=1 pid=5506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:56.572000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:56.582403 kernel: audit: type=1327 audit(1765560296.572:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:24:56.586452 systemd-logind[1663]: New session 22 of user core. Dec 12 17:24:56.595620 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:24:56.599000 audit[5506]: USER_START pid=5506 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:56.601000 audit[5510]: CRED_ACQ pid=5510 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:56.607183 kernel: audit: type=1105 audit(1765560296.599:881): pid=5506 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:56.607261 kernel: audit: type=1103 audit(1765560296.601:882): pid=5510 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:57.120600 sshd[5510]: Connection closed by 139.178.89.65 port 37594 Dec 12 17:24:57.120924 sshd-session[5506]: pam_unix(sshd:session): session closed for user core Dec 12 17:24:57.122000 audit[5506]: USER_END pid=5506 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:57.126347 systemd-logind[1663]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:24:57.126592 systemd[1]: sshd@21-10.0.6.252:22-139.178.89.65:37594.service: Deactivated successfully. Dec 12 17:24:57.122000 audit[5506]: CRED_DISP pid=5506 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:57.129446 systemd[1]: session-22.scope: Deactivated successfully. 
Dec 12 17:24:57.131329 kernel: audit: type=1106 audit(1765560297.122:883): pid=5506 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:57.131386 kernel: audit: type=1104 audit(1765560297.122:884): pid=5506 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:24:57.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.6.252:22-139.178.89.65:37594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.134043 systemd-logind[1663]: Removed session 22. Dec 12 17:25:01.507788 kubelet[2889]: E1212 17:25:01.507609 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:25:01.508507 kubelet[2889]: E1212 17:25:01.507860 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:25:01.508507 kubelet[2889]: E1212 17:25:01.507923 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:25:02.281903 systemd[1]: Started sshd@22-10.0.6.252:22-139.178.89.65:50452.service - OpenSSH per-connection server daemon (139.178.89.65:50452). Dec 12 17:25:02.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.6.252:22-139.178.89.65:50452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:02.283190 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:25:02.283261 kernel: audit: type=1130 audit(1765560302.281:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.6.252:22-139.178.89.65:50452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 12 17:25:03.104000 audit[5530]: USER_ACCT pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.106927 sshd[5530]: Accepted publickey for core from 139.178.89.65 port 50452 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:25:03.107793 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:03.107000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.113552 kernel: audit: type=1101 audit(1765560303.104:887): pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.113634 kernel: audit: type=1103 audit(1765560303.107:888): pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.113657 kernel: audit: type=1006 audit(1765560303.107:889): pid=5530 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 17:25:03.113380 systemd-logind[1663]: New session 23 of user core. Dec 12 17:25:03.107000 audit[5530]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdda10e80 a2=3 a3=0 items=0 ppid=1 pid=5530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.120444 kernel: audit: type=1300 audit(1765560303.107:889): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdda10e80 a2=3 a3=0 items=0 ppid=1 pid=5530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:03.120554 kernel: audit: type=1327 audit(1765560303.107:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:25:03.107000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:25:03.124670 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 17:25:03.128000 audit[5530]: USER_START pid=5530 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.128000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.137037 kernel: audit: type=1105 audit(1765560303.128:890): pid=5530 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.137122 kernel: audit: type=1103 audit(1765560303.128:891): pid=5533 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.639709 sshd[5533]: Connection closed by 139.178.89.65 port 50452 Dec 12 17:25:03.640642 sshd-session[5530]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:03.642000 audit[5530]: USER_END pid=5530 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.647263 systemd[1]: sshd@22-10.0.6.252:22-139.178.89.65:50452.service: Deactivated successfully. Dec 12 17:25:03.643000 audit[5530]: CRED_DISP pid=5530 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.648220 systemd-logind[1663]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:25:03.650025 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:25:03.651541 kernel: audit: type=1106 audit(1765560303.642:892): pid=5530 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.651612 kernel: audit: type=1104 audit(1765560303.643:893): pid=5530 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:03.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.6.252:22-139.178.89.65:50452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:03.654326 systemd-logind[1663]: Removed session 23. 
Dec 12 17:25:06.507558 kubelet[2889]: E1212 17:25:06.507500 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:25:10.506937 kubelet[2889]: E1212 17:25:10.506598 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:25:10.507579 containerd[1697]: time="2025-12-12T17:25:10.507433440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:25:10.851896 containerd[1697]: time="2025-12-12T17:25:10.851581854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:10.856374 containerd[1697]: time="2025-12-12T17:25:10.856309666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:25:10.856523 containerd[1697]: time="2025-12-12T17:25:10.856430226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:10.856623 kubelet[2889]: E1212 17:25:10.856582 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:25:10.856687 kubelet[2889]: E1212 17:25:10.856635 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:25:10.856803 kubelet[2889]: E1212 17:25:10.856754 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:036088a0863247c4915e9fb15ee70601,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:10.858746 containerd[1697]: time="2025-12-12T17:25:10.858721312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:25:11.188419 containerd[1697]: time="2025-12-12T17:25:11.188049688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:11.196564 containerd[1697]: time="2025-12-12T17:25:11.196489110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:25:11.196699 containerd[1697]: time="2025-12-12T17:25:11.196627670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:11.196816 kubelet[2889]: E1212 17:25:11.196773 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:25:11.196876 kubelet[2889]: E1212 17:25:11.196824 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:25:11.196975 kubelet[2889]: E1212 17:25:11.196933 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvqfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58c79b5c7c-dvc69_calico-system(785c11bf-7217-4d42-afaf-b9c091f491b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:11.198545 kubelet[2889]: E1212 17:25:11.198504 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:25:13.507044 kubelet[2889]: E1212 17:25:13.506980 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:25:13.508523 kubelet[2889]: E1212 17:25:13.507281 2889 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:25:15.509650 containerd[1697]: time="2025-12-12T17:25:15.509608434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:25:15.857597 containerd[1697]: time="2025-12-12T17:25:15.857550897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:15.863685 containerd[1697]: time="2025-12-12T17:25:15.863587273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:25:15.863794 containerd[1697]: time="2025-12-12T17:25:15.863660113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:15.864165 kubelet[2889]: E1212 17:25:15.863903 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:25:15.864165 kubelet[2889]: E1212 17:25:15.863960 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:25:15.864165 kubelet[2889]: E1212 17:25:15.864091 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24jgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-696f66b658-n7nn2_calico-system(f675d505-5319-47a1-bc86-409be66cd047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:15.865313 kubelet[2889]: E1212 17:25:15.865256 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:25:19.508025 containerd[1697]: time="2025-12-12T17:25:19.507784539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:25:19.832135 containerd[1697]: time="2025-12-12T17:25:19.831989942Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:19.833749 containerd[1697]: time="2025-12-12T17:25:19.833646666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:25:19.833749 containerd[1697]: time="2025-12-12T17:25:19.833675866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:19.833914 kubelet[2889]: E1212 17:25:19.833876 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:25:19.834207 kubelet[2889]: E1212 17:25:19.833925 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:25:19.834207 kubelet[2889]: E1212 17:25:19.834043 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:19.836164 
containerd[1697]: time="2025-12-12T17:25:19.836021552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:25:20.175024 containerd[1697]: time="2025-12-12T17:25:20.174923272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:20.176835 containerd[1697]: time="2025-12-12T17:25:20.176772837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:25:20.176969 containerd[1697]: time="2025-12-12T17:25:20.176812117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:20.177076 kubelet[2889]: E1212 17:25:20.177010 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:25:20.177076 kubelet[2889]: E1212 17:25:20.177063 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:25:20.177232 kubelet[2889]: E1212 17:25:20.177183 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrczb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-79stw_calico-system(3fd99a4f-5151-4d6d-a968-dc993caff3f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:20.178692 kubelet[2889]: E1212 17:25:20.178561 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:25:21.509938 containerd[1697]: time="2025-12-12T17:25:21.509896140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:25:21.841377 containerd[1697]: time="2025-12-12T17:25:21.841218321Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:21.843430 containerd[1697]: time="2025-12-12T17:25:21.843298366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:25:21.843430 containerd[1697]: time="2025-12-12T17:25:21.843372726Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:21.843619 kubelet[2889]: E1212 17:25:21.843551 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:25:21.843619 kubelet[2889]: E1212 17:25:21.843596 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:25:21.844189 kubelet[2889]: E1212 17:25:21.843845 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xpg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-878796b8-5d5jh_calico-apiserver(51905ae7-f059-4931-8cc7-e32bc90c24e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:21.845052 kubelet[2889]: E1212 17:25:21.844994 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:25:24.507339 kubelet[2889]: E1212 17:25:24.507281 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:25:27.507422 kubelet[2889]: E1212 17:25:27.507351 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:25:28.506886 containerd[1697]: time="2025-12-12T17:25:28.506834076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:25:28.829657 containerd[1697]: time="2025-12-12T17:25:28.829520194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:28.830930 containerd[1697]: time="2025-12-12T17:25:28.830887397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:25:28.831016 containerd[1697]: time="2025-12-12T17:25:28.830968158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:28.831146 kubelet[2889]: E1212 17:25:28.831109 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:25:28.831446 kubelet[2889]: E1212 17:25:28.831160 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:25:28.831492 kubelet[2889]: E1212 17:25:28.831448 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hffrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nzjsj_calico-system(0bdf7dfd-6bce-4744-a930-376661816277): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:28.831578 containerd[1697]: time="2025-12-12T17:25:28.831550399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:25:28.833178 kubelet[2889]: E1212 17:25:28.832837 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 
12 17:25:29.157295 containerd[1697]: time="2025-12-12T17:25:29.157185725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:25:29.158454 containerd[1697]: time="2025-12-12T17:25:29.158412248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:25:29.158594 containerd[1697]: time="2025-12-12T17:25:29.158418048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:25:29.158656 kubelet[2889]: E1212 17:25:29.158620 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:25:29.158704 kubelet[2889]: E1212 17:25:29.158668 2889 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:25:29.158882 kubelet[2889]: E1212 17:25:29.158795 2889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrqmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-878796b8-spr8v_calico-apiserver(6e7780bc-34b6-4688-ae6a-fbd80527fba7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:25:29.160027 kubelet[2889]: E1212 17:25:29.159974 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:25:30.507442 kubelet[2889]: E1212 17:25:30.507376 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-79stw" podUID="3fd99a4f-5151-4d6d-a968-dc993caff3f6" Dec 12 17:25:32.414052 systemd[1]: cri-containerd-6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15.scope: Deactivated successfully. Dec 12 17:25:32.414816 systemd[1]: cri-containerd-6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15.scope: Consumed 5.366s CPU time, 60.2M memory peak. Dec 12 17:25:32.415000 audit: BPF prog-id=256 op=LOAD Dec 12 17:25:32.418812 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:25:32.418882 kernel: audit: type=1334 audit(1765560332.415:895): prog-id=256 op=LOAD Dec 12 17:25:32.418900 kernel: audit: type=1334 audit(1765560332.415:896): prog-id=88 op=UNLOAD Dec 12 17:25:32.415000 audit: BPF prog-id=88 op=UNLOAD Dec 12 17:25:32.418963 containerd[1697]: time="2025-12-12T17:25:32.416177151Z" level=info msg="received container exit event container_id:\"6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15\" id:\"6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15\" pid:2725 exit_status:1 exited_at:{seconds:1765560332 nanos:415557749}" Dec 12 17:25:32.422000 audit: BPF prog-id=108 op=UNLOAD Dec 12 17:25:32.422000 audit: BPF prog-id=112 op=UNLOAD Dec 12 17:25:32.424948 kernel: audit: type=1334 audit(1765560332.422:897): prog-id=108 op=UNLOAD Dec 12 17:25:32.425089 kernel: audit: type=1334 audit(1765560332.422:898): prog-id=112 op=UNLOAD Dec 12 17:25:32.438551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15-rootfs.mount: Deactivated successfully. 
Dec 12 17:25:32.861330 kubelet[2889]: E1212 17:25:32.861275 2889 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.6.252:51490->10.0.6.248:2379: read: connection timed out" Dec 12 17:25:33.083196 kubelet[2889]: I1212 17:25:33.083166 2889 scope.go:117] "RemoveContainer" containerID="6ad973f82552620d0cc7c9497f740e14455dcab9274ba8e81d67243264081b15" Dec 12 17:25:33.084917 containerd[1697]: time="2025-12-12T17:25:33.084880728Z" level=info msg="CreateContainer within sandbox \"0362bab8d83d63bcb417a29f9360ad26cece67ed00e361a23fd31a10add80c96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 12 17:25:33.096655 containerd[1697]: time="2025-12-12T17:25:33.096605118Z" level=info msg="Container 7849e96632fd688d6206b66cc3ac685f4d1a9dd63ff4292eb02f9a3c1c0d3a5e: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:33.104771 containerd[1697]: time="2025-12-12T17:25:33.104718419Z" level=info msg="CreateContainer within sandbox \"0362bab8d83d63bcb417a29f9360ad26cece67ed00e361a23fd31a10add80c96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7849e96632fd688d6206b66cc3ac685f4d1a9dd63ff4292eb02f9a3c1c0d3a5e\"" Dec 12 17:25:33.105470 containerd[1697]: time="2025-12-12T17:25:33.105431781Z" level=info msg="StartContainer for \"7849e96632fd688d6206b66cc3ac685f4d1a9dd63ff4292eb02f9a3c1c0d3a5e\"" Dec 12 17:25:33.106734 containerd[1697]: time="2025-12-12T17:25:33.106698264Z" level=info msg="connecting to shim 7849e96632fd688d6206b66cc3ac685f4d1a9dd63ff4292eb02f9a3c1c0d3a5e" address="unix:///run/containerd/s/6a339dcefc961c0c352be0e9d20f78982844ad055521d915f3959269dd56b5e3" protocol=ttrpc version=3 Dec 12 17:25:33.145912 systemd[1]: Started cri-containerd-7849e96632fd688d6206b66cc3ac685f4d1a9dd63ff4292eb02f9a3c1c0d3a5e.scope - libcontainer container 7849e96632fd688d6206b66cc3ac685f4d1a9dd63ff4292eb02f9a3c1c0d3a5e. 
Dec 12 17:25:33.164597 kernel: audit: type=1334 audit(1765560333.157:899): prog-id=257 op=LOAD Dec 12 17:25:33.164714 kernel: audit: type=1334 audit(1765560333.157:900): prog-id=258 op=LOAD Dec 12 17:25:33.164736 kernel: audit: type=1300 audit(1765560333.157:900): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2579 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.157000 audit: BPF prog-id=257 op=LOAD Dec 12 17:25:33.157000 audit: BPF prog-id=258 op=LOAD Dec 12 17:25:33.157000 audit[5610]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2579 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.165481 kernel: audit: type=1327 audit(1765560333.157:900): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343965393636333266643638386436323036623636636333616336 Dec 12 17:25:33.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343965393636333266643638386436323036623636636333616336 Dec 12 17:25:33.171440 kernel: audit: type=1334 audit(1765560333.158:901): prog-id=258 op=UNLOAD Dec 12 17:25:33.171521 kernel: audit: type=1300 audit(1765560333.158:901): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.158000 audit: BPF prog-id=258 op=UNLOAD Dec 12 17:25:33.158000 audit[5610]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343965393636333266643638386436323036623636636333616336 Dec 12 17:25:33.158000 audit: BPF prog-id=259 op=LOAD Dec 12 17:25:33.158000 audit[5610]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2579 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343965393636333266643638386436323036623636636333616336 Dec 12 17:25:33.159000 audit: BPF prog-id=260 op=LOAD Dec 12 17:25:33.159000 audit[5610]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2579 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343965393636333266643638386436323036623636636333616336 Dec 12 17:25:33.163000 audit: BPF prog-id=260 op=UNLOAD Dec 12 17:25:33.163000 audit[5610]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343965393636333266643638386436323036623636636333616336 Dec 12 17:25:33.163000 audit: BPF prog-id=259 op=UNLOAD Dec 12 17:25:33.163000 audit[5610]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343965393636333266643638386436323036623636636333616336 Dec 12 17:25:33.164000 audit: BPF prog-id=261 op=LOAD Dec 12 17:25:33.164000 audit[5610]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2579 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:33.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343965393636333266643638386436323036623636636333616336 Dec 12 17:25:33.195339 containerd[1697]: time="2025-12-12T17:25:33.195300415Z" level=info msg="StartContainer for \"7849e96632fd688d6206b66cc3ac685f4d1a9dd63ff4292eb02f9a3c1c0d3a5e\" returns successfully" Dec 12 17:25:33.375638 systemd[1]: cri-containerd-ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40.scope: Deactivated successfully. Dec 12 17:25:33.376073 systemd[1]: cri-containerd-ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40.scope: Consumed 36.620s CPU time, 123.7M memory peak. 
Dec 12 17:25:33.378183 containerd[1697]: time="2025-12-12T17:25:33.378045049Z" level=info msg="received container exit event container_id:\"ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40\" id:\"ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40\" pid:3213 exit_status:1 exited_at:{seconds:1765560333 nanos:377742728}" Dec 12 17:25:33.380000 audit: BPF prog-id=146 op=UNLOAD Dec 12 17:25:33.380000 audit: BPF prog-id=150 op=UNLOAD Dec 12 17:25:33.438478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40-rootfs.mount: Deactivated successfully. Dec 12 17:25:34.088830 kubelet[2889]: I1212 17:25:34.088802 2889 scope.go:117] "RemoveContainer" containerID="ff83c15267662871ce874a391a8ad99b34dc958e844c871a4d28b00c41300e40" Dec 12 17:25:34.090778 containerd[1697]: time="2025-12-12T17:25:34.090311019Z" level=info msg="CreateContainer within sandbox \"28a6429585a7eafad46b7a902e4b929299fe31acf5567778fd7b9e388a90a772\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 12 17:25:34.106040 containerd[1697]: time="2025-12-12T17:25:34.104700657Z" level=info msg="Container 0231dd4d41d1921aaea6c39f65d92c9e73033b122866b8ad3b75d20e52c54271: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:34.113656 containerd[1697]: time="2025-12-12T17:25:34.113617360Z" level=info msg="CreateContainer within sandbox \"28a6429585a7eafad46b7a902e4b929299fe31acf5567778fd7b9e388a90a772\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0231dd4d41d1921aaea6c39f65d92c9e73033b122866b8ad3b75d20e52c54271\"" Dec 12 17:25:34.114499 containerd[1697]: time="2025-12-12T17:25:34.114429602Z" level=info msg="StartContainer for \"0231dd4d41d1921aaea6c39f65d92c9e73033b122866b8ad3b75d20e52c54271\"" Dec 12 17:25:34.115653 containerd[1697]: time="2025-12-12T17:25:34.115552365Z" level=info msg="connecting to shim 0231dd4d41d1921aaea6c39f65d92c9e73033b122866b8ad3b75d20e52c54271" address="unix:///run/containerd/s/183e350dc661b62f3d733e7feeae2cc4e06a4d2fe91f83f9b0a392c54089e2c9" protocol=ttrpc version=3 Dec 12 17:25:34.141814 systemd[1]: Started cri-containerd-0231dd4d41d1921aaea6c39f65d92c9e73033b122866b8ad3b75d20e52c54271.scope - libcontainer container 0231dd4d41d1921aaea6c39f65d92c9e73033b122866b8ad3b75d20e52c54271. 
Dec 12 17:25:34.151000 audit: BPF prog-id=262 op=LOAD Dec 12 17:25:34.152000 audit: BPF prog-id=263 op=LOAD Dec 12 17:25:34.152000 audit[5652]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3021 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333164643464343164313932316161656136633339663635643932 Dec 12 17:25:34.152000 audit: BPF prog-id=263 op=UNLOAD Dec 12 17:25:34.152000 audit[5652]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333164643464343164313932316161656136633339663635643932 Dec 12 17:25:34.152000 audit: BPF prog-id=264 op=LOAD Dec 12 17:25:34.152000 audit[5652]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3021 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333164643464343164313932316161656136633339663635643932 Dec 12 17:25:34.152000 audit: BPF prog-id=265 op=LOAD Dec 12 17:25:34.152000 audit[5652]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3021 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333164643464343164313932316161656136633339663635643932 Dec 12 17:25:34.152000 audit: BPF prog-id=265 op=UNLOAD Dec 12 17:25:34.152000 audit[5652]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333164643464343164313932316161656136633339663635643932 Dec 12 17:25:34.152000 audit: BPF prog-id=264 op=UNLOAD Dec 12 17:25:34.152000 audit[5652]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3021 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333164643464343164313932316161656136633339663635643932 Dec 12 17:25:34.152000 audit: BPF prog-id=266 op=LOAD Dec 12 17:25:34.152000 audit[5652]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3021 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:34.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333164643464343164313932316161656136633339663635643932 Dec 12 17:25:34.174607 containerd[1697]: time="2025-12-12T17:25:34.174565078Z" level=info msg="StartContainer for \"0231dd4d41d1921aaea6c39f65d92c9e73033b122866b8ad3b75d20e52c54271\" returns successfully" Dec 12 17:25:34.974014 kubelet[2889]: I1212 17:25:34.972848 2889 status_manager.go:890] "Failed to get status for pod" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" pod="calico-system/whisker-58c79b5c7c-dvc69" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.6.252:51392->10.0.6.248:2379: read: connection timed out" Dec 12 17:25:35.421788 kubelet[2889]: E1212 17:25:35.421589 2889 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.6.252:51260->10.0.6.248:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4515-1-0-8-acd31a5336.188087c2e0e4e693 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4515-1-0-8-acd31a5336,UID:a3b2bee5dd740c3a72ea0bffa8a511a7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-8-acd31a5336,},FirstTimestamp:2025-12-12 17:25:24.995737235 +0000 UTC m=+233.718785096,LastTimestamp:2025-12-12 17:25:24.995737235 +0000 UTC m=+233.718785096,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-8-acd31a5336,}" Dec 12 17:25:35.507279 kubelet[2889]: E1212 17:25:35.507212 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-5d5jh" podUID="51905ae7-f059-4931-8cc7-e32bc90c24e4" Dec 12 17:25:38.375234 systemd[1]: 
cri-containerd-9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9.scope: Deactivated successfully. Dec 12 17:25:38.376169 systemd[1]: cri-containerd-9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9.scope: Consumed 3.148s CPU time, 25.1M memory peak. Dec 12 17:25:38.376991 containerd[1697]: time="2025-12-12T17:25:38.376693674Z" level=info msg="received container exit event container_id:\"9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9\" id:\"9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9\" pid:2724 exit_status:1 exited_at:{seconds:1765560338 nanos:376323433}" Dec 12 17:25:38.378196 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 12 17:25:38.378257 kernel: audit: type=1334 audit(1765560338.376:917): prog-id=267 op=LOAD Dec 12 17:25:38.376000 audit: BPF prog-id=267 op=LOAD Dec 12 17:25:38.376000 audit: BPF prog-id=83 op=UNLOAD Dec 12 17:25:38.380037 kernel: audit: type=1334 audit(1765560338.376:918): prog-id=83 op=UNLOAD Dec 12 17:25:38.382000 audit: BPF prog-id=103 op=UNLOAD Dec 12 17:25:38.382000 audit: BPF prog-id=107 op=UNLOAD Dec 12 17:25:38.385200 kernel: audit: type=1334 audit(1765560338.382:919): prog-id=103 op=UNLOAD Dec 12 17:25:38.385258 kernel: audit: type=1334 audit(1765560338.382:920): prog-id=107 op=UNLOAD Dec 12 17:25:38.401446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9-rootfs.mount: Deactivated successfully. Dec 12 17:25:38.506622 kubelet[2889]: E1212 17:25:38.506555 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-696f66b658-n7nn2" podUID="f675d505-5319-47a1-bc86-409be66cd047" Dec 12 17:25:38.507477 kubelet[2889]: E1212 17:25:38.507428 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58c79b5c7c-dvc69" podUID="785c11bf-7217-4d42-afaf-b9c091f491b5" Dec 12 17:25:39.103349 kubelet[2889]: I1212 17:25:39.103306 2889 scope.go:117] "RemoveContainer" containerID="9e6b842e042e2c6e792aa566141f0dc8ced41205c436311218433632087738c9" Dec 12 17:25:39.105155 containerd[1697]: time="2025-12-12T17:25:39.105110286Z" level=info msg="CreateContainer within sandbox \"f6d0dd80b2ff77660df9f47489cfcaf14a94b9900337876e98f9f2a494382f05\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 12 17:25:39.115049 containerd[1697]: 
time="2025-12-12T17:25:39.114155390Z" level=info msg="Container ffab1dd67d9e0c91a5c2f6622ca176bd5b76ab1f77fc67f81a8e68e498b21341: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:39.123722 containerd[1697]: time="2025-12-12T17:25:39.123593214Z" level=info msg="CreateContainer within sandbox \"f6d0dd80b2ff77660df9f47489cfcaf14a94b9900337876e98f9f2a494382f05\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ffab1dd67d9e0c91a5c2f6622ca176bd5b76ab1f77fc67f81a8e68e498b21341\"" Dec 12 17:25:39.124177 containerd[1697]: time="2025-12-12T17:25:39.124151495Z" level=info msg="StartContainer for \"ffab1dd67d9e0c91a5c2f6622ca176bd5b76ab1f77fc67f81a8e68e498b21341\"" Dec 12 17:25:39.125452 containerd[1697]: time="2025-12-12T17:25:39.125422859Z" level=info msg="connecting to shim ffab1dd67d9e0c91a5c2f6622ca176bd5b76ab1f77fc67f81a8e68e498b21341" address="unix:///run/containerd/s/b6f4576bac7c3c7178e273037d4ae9eba096bd6e22abc0e09aecc32cbabbcb7a" protocol=ttrpc version=3 Dec 12 17:25:39.144630 systemd[1]: Started cri-containerd-ffab1dd67d9e0c91a5c2f6622ca176bd5b76ab1f77fc67f81a8e68e498b21341.scope - libcontainer container ffab1dd67d9e0c91a5c2f6622ca176bd5b76ab1f77fc67f81a8e68e498b21341. Dec 12 17:25:39.155000 audit: BPF prog-id=268 op=LOAD Dec 12 17:25:39.156000 audit: BPF prog-id=269 op=LOAD Dec 12 17:25:39.157974 kernel: audit: type=1334 audit(1765560339.155:921): prog-id=268 op=LOAD Dec 12 17:25:39.158038 kernel: audit: type=1334 audit(1765560339.156:922): prog-id=269 op=LOAD Dec 12 17:25:39.158060 kernel: audit: type=1300 audit(1765560339.156:922): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2551 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.156000 audit[5702]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2551 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666616231646436376439653063393161356332663636323263613137 Dec 12 17:25:39.165961 kernel: audit: type=1327 audit(1765560339.156:922): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666616231646436376439653063393161356332663636323263613137 Dec 12 17:25:39.166067 kernel: audit: type=1334 audit(1765560339.156:923): prog-id=269 op=UNLOAD Dec 12 17:25:39.156000 audit: BPF prog-id=269 op=UNLOAD Dec 12 17:25:39.156000 audit[5702]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.170985 kernel: audit: type=1300 audit(1765560339.156:923): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666616231646436376439653063393161356332663636323263613137 Dec 12 17:25:39.156000 audit: BPF prog-id=270 op=LOAD Dec 12 17:25:39.156000 audit[5702]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2551 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666616231646436376439653063393161356332663636323263613137 Dec 12 17:25:39.157000 audit: BPF prog-id=271 op=LOAD Dec 12 17:25:39.157000 audit[5702]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2551 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666616231646436376439653063393161356332663636323263613137 Dec 12 17:25:39.161000 audit: BPF prog-id=271 op=UNLOAD Dec 12 17:25:39.161000 audit[5702]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666616231646436376439653063393161356332663636323263613137 Dec 12 17:25:39.161000 audit: BPF prog-id=270 op=UNLOAD Dec 12 17:25:39.161000 audit[5702]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666616231646436376439653063393161356332663636323263613137 Dec 12 17:25:39.161000 audit: BPF prog-id=272 op=LOAD Dec 12 17:25:39.161000 audit[5702]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2551 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.161000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666616231646436376439653063393161356332663636323263613137 Dec 12 17:25:39.196618 containerd[1697]: time="2025-12-12T17:25:39.196575284Z" level=info msg="StartContainer for \"ffab1dd67d9e0c91a5c2f6622ca176bd5b76ab1f77fc67f81a8e68e498b21341\" returns successfully" Dec 12 17:25:40.507408 kubelet[2889]: E1212 17:25:40.507361 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-878796b8-spr8v" podUID="6e7780bc-34b6-4688-ae6a-fbd80527fba7" Dec 12 17:25:40.507765 kubelet[2889]: E1212 17:25:40.507366 2889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nzjsj" podUID="0bdf7dfd-6bce-4744-a930-376661816277" Dec 12 17:25:42.320449 kernel: pcieport 0000:00:01.0: pciehp: Slot(0): Button press: will power off in 5 sec